Set OpenCL device on OS X to dedicated AMD GPU

I'm in the process of setting up Torch on OS X on a 2015 MacBook Pro with a Radeon GPU, using this library (cltorch) for OpenCL support.
I can successfully run Torch scripts now, but running this test script, which outputs the device and platform being used, I get:
Using Apple , OpenCL platform: Apple
Using OpenCL device: Iris Pro
Obviously, I want Torch to run on the Radeon instead of the integrated Iris Pro, but I have no idea how to do that.

You can use cltorch.setDevice to choose the device, like:
cltorch.setDevice(2)
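To find out which index corresponds to the Radeon, it helps to enumerate the OpenCL devices first. Here is a minimal sketch using pyopencl (my choice for illustration, not part of cltorch; note that cltorch's device indices are 1-based):

import pyopencl as cl  # pip install pyopencl

# Print every OpenCL platform/device pair so you can spot which
# index the Radeon gets (cltorch numbers devices starting at 1).
for platform in cl.get_platforms():
    for i, device in enumerate(platform.get_devices(), start=1):
        print(i, platform.name, "->", device.name)

On the machine above this should list both the Iris Pro and the Radeon; pass the Radeon's index to cltorch.setDevice.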

Related

Using PyTorch CUDA on MacBook Pro

I am using a MacBook Pro (16-inch, 2019, macOS 10.15.5 (19F96)) with these GPUs:
AMD Radeon Pro 5300M
Intel UHD Graphics 630
I am trying to use PyTorch with CUDA on my Mac.
All of the guides I have seen assume that I have an Nvidia graphics card.
I found this issue: https://github.com/pytorch/pytorch/issues/10657, but it looks like I would need to install ROCm, and according to its Supported Operating Systems, it only supports Linux.
Is it possible to run PyTorch on the GPU using a Mac and an AMD graphics card?
No.
CUDA works only with supported Nvidia GPUs, not with AMD GPUs.
There is an ongoing effort to support acceleration for AMD GPUs in PyTorch (via ROCm, which does not work on macOS).
CUDA is a GPU computing framework developed by Nvidia for Nvidia GPUs, and the same goes for the cuDNN library.
At the moment, you cannot use GPU acceleration with PyTorch on an AMD GPU, i.e. without an Nvidia GPU. The OS is not the problem: it doesn't matter that you have macOS. It is a matter of what GPU you have.
What you can do, though, is either purchase an external Nvidia GPU or use a cluster. For example, Google Colab offers PyTorch compatibility.
PyTorch now supports training using Metal.
Announcement: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
To get started, install the latest nightly build of PyTorch: https://pytorch.org/get-started/locally/
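Once a Metal-enabled build is installed, a quick sanity check looks roughly like this (a minimal sketch; the mps backend ships with PyTorch 1.12 and later):

import torch

# The MPS backend is PyTorch's Metal-based GPU support on macOS
print(torch.backends.mps.is_available())  # True on a supported Mac

# Allocate a tensor on the GPU and do a small computation there
device = torch.device("mps")
x = torch.ones(3, 3, device=device)
print((x @ x).device)  # mps:0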
Answer (pre-May 2022)
Unfortunately, no GPU acceleration is available when using PyTorch on macOS. CUDA has not been available on macOS for a while, and it only runs on Nvidia GPUs. AMD's equivalent library, ROCm, requires Linux.
If you are working with macOS 12.0 or later and would be willing to use TensorFlow instead, you can use the Mac optimized build of TensorFlow, which supports GPU training using Apple's own GPU acceleration library Metal.
Currently, you need Python 3.8 to run it (3.7 and below, and 3.9 and above, don't work). To install, run:
pip3 install tensorflow-macos
pip3 install tensorflow-metal
You may need to uninstall existing TensorFlow distributions first, or work in a virtual environment.
Then you can verify that the GPU is visible:
import tensorflow as tf
tf.config.list_physical_devices('GPU')  # should list the GPU
(tf.test.is_gpu_available() also works here, but it is deprecated.)
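To confirm that work actually lands on the GPU, you can run a small op under an explicit device scope (a minimal sketch):

import tensorflow as tf

# Place the matmul explicitly on the Metal-backed GPU device
with tf.device("/GPU:0"):
    a = tf.random.normal([1000, 1000])
    b = tf.random.normal([1000, 1000])
    c = tf.matmul(a, b)
print(c.device)  # should mention GPU:0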
It will be possible in about 4 months, around March 2022. See Soumith's reply to this question on GitHub: https://github.com/pytorch/pytorch/issues/47702

Why "No OpenCL Devices found"?

I use an ATI Mobility Radeon HD 545v and run Windows 7 32-bit. I have installed 13-12_win7_win8_32_dd_ccc_whql.exe and AMD-APP-SDK-v2.9-Windows-321.exe, but when I open GUIMiner I get "No OpenCL Devices found". How can I solve this?

OpenGL 3.3 Ubuntu (Virtual Machine)

I need OpenGL 3.3 or higher to use GLSL 3.3. My problem is that I have Mac OS X, which doesn't let me use an OpenGL version higher than 2.1. I've installed an Ubuntu virtual machine inside my system, but it also only has OpenGL 2.1. I don't understand what's going on, because I have an AMD Radeon HD 6490M 256 MB, which is compatible with OpenGL 4.1. Is there any way I can use OpenGL 3.3 or higher without partitioning the disk?
I don't understand what's going on, because I have an AMD Radeon HD 6490M 256 MB, which is compatible with OpenGL 4.1
The GPU serves the host machine. The virtual machine sees only a dumb framebuffer device, or an OpenGL API passed through to the guest by the VM software running on the host.
If you want to leverage the OpenGL 4 capabilities, you must install an OS that can access the GPU natively. Also, if you want to run Linux, you'll have to install the proprietary fglrx drivers (also called Catalyst for Linux), as the open-source drivers that ship as distribution defaults haven't caught up yet.
Is there any way I can use OpenGL 3.3 or higher without partitioning the disk?
Upgrade to OS X 10.9 when it comes out or grab the beta.
Or find some VM software for OS X that supports VGA passthrough.
If you're willing to repartition you can install Windows or Linux natively and use the drivers from AMD.

Offline compilation for AMD and NVIDIA OpenCL Kernels without cards installed

I was trying to figure out a way to perform offline compilation of OpenCL kernels without graphics cards installed. I have installed the SDKs.
Does anyone have experience compiling OpenCL kernels without a graphics card installed, for either NVIDIA or AMD?
I had asked a similar question on the AMD forums
(http://devgurus.amd.com/message/1284379).
The NVIDIA forums have long been inaccessible, so I couldn't get any help there.
Thanks
AMD has an OpenCL extension for compiling binaries for devices that are not present on the system. The extension is called cl_amd_offline_devices. Pass the property CL_CONTEXT_OFFLINE_DEVICES_AMD when creating a context, and all of AMD's supported devices are reported and can be used to create binaries as if they were present on the system.
Check out their OpenCL programming guide at http://developer.amd.com/tools/hc/AMDAPPSDK/assets/AMD_Accelerated_Parallel_Processing_OpenCL_Programming_Guide.pdf for more info.
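For illustration, here is a rough sketch of that flow in Python with pyopencl, assuming a build that exposes the AMD extension constant (the C equivalent passes CL_CONTEXT_OFFLINE_DEVICES_AMD in the properties array of clCreateContextFromType):

import pyopencl as cl

# Create a context that also reports AMD devices not physically present
platform = cl.get_platforms()[0]  # assumed here to be the AMD platform
ctx = cl.Context(
    dev_type=cl.device_type.ALL,
    properties=[(cl.context_properties.PLATFORM, platform),
                (cl.context_properties.OFFLINE_DEVICES_AMD, 1)])

# Build a trivial kernel for every device in the context
prg = cl.Program(ctx, "__kernel void noop() {}").build()

# One binary blob per device, ready to save and ship
binaries = prg.get_info(cl.program_info.BINARIES)
for dev, binary in zip(ctx.devices, binaries):
    print(dev.name, len(binary), "bytes")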
There is no need for a graphics card: you can compile OpenCL programs for the CPU too. This works if you have an Intel or AMD CPU. Download the latest OpenCL SDK from the corresponding manufacturer's website and compile the OpenCL program:
Intel OpenCL SDK
AMD APP

Install an AMD OpenCL CPU driver with an Nvidia graphics card

I have seen this question many times but never found an answer for Windows.
I recently ported my CUDA code to OpenCL.
When testing with an ATI card, the Catalyst drivers include a CPU OpenCL driver, so I can run the OpenCL code on the CPU.
When testing with an NVIDIA card, there is no driver for the CPU.
Question is: how can I install (and deploy) a CPU driver when running with an Nvidia card?
Thanks a lot
To use OpenCL on the CPU you don't need any driver; you only need an OpenCL runtime that supports the CPU, which (in the case of AMD/ATI) is part of the APP SDK. It can be installed no matter what GPU you have. Your end users would also have to install the APP SDK: currently, there is no way to install the OpenCL runtime alone.
If you have an Intel CPU, you should rather try the Intel OpenCL SDK, which has a separate installer. That said, the AMD APP SDK works quite well on Intel CPUs, but not vice versa.
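As a quick check that a CPU runtime is installed and visible regardless of the GPU vendor, you can ask each platform for its CPU devices (a minimal sketch with pyopencl; the C equivalent is clGetDeviceIDs with CL_DEVICE_TYPE_CPU):

import pyopencl as cl

# List the CPU devices each installed runtime exposes; with the AMD
# APP SDK or Intel OpenCL SDK installed, the CPU shows up here even
# if the only GPU in the machine is an Nvidia card.
for platform in cl.get_platforms():
    try:
        cpus = platform.get_devices(device_type=cl.device_type.CPU)
    except cl.Error:
        continue  # this platform has no CPU device
    for device in cpus:
        print(platform.name, "->", device.name)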
