Is there any documentation on how to install/set up Driverless AI in a High-Performance Computing (HPC) environment, so that I can request a few nodes (each with a GPU) and have DAI take advantage of them?
Here are the available installation instructions: http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/installing.html
Here's the Linux in the Cloud section: http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/install/linux-in-the-cloud.html
Related
How do I set up a backend for Deep Water (H2O) on Ubuntu 16.04? The GPUs I am using are AMD Radeon RX Vega. Is anyone experienced with this topic? Do you need further information? Most explanations and procedures described here and elsewhere seem to refer to NVIDIA cards.
Deep Water is a legacy project (as of December 2017), which means that it is no longer under active development. The H2O.ai team has no current plans to add new features; however, contributions from the community (in the form of pull requests) are welcome.
Having said that, Deep Water was never built for AMD GPUs because we used TensorFlow and MXNet as backends, and they did not support AMD.
I am working on a GPU server from my college with compute capability less than 3.0, Windows 7 Professional (64-bit), and 48 GB RAM. I tried to install TensorFlow earlier, but then learned that my GPU cannot support it.
I now want to work with Keras, but since TensorFlow is not available (I cannot even import it), will Keras work at all?
I have to do video processing and work with large video datasets for Dynamic Sign Language Recognition. Can anyone suggest how I can get going in the field of Deep Learning with such a GPU server? Or, if I want to work on the CPU only, will that be a problem for video processing?
I also have an HP ProBook 440 G4 laptop with Windows 10 Pro, so is it better than the GPU server I have or not?
I am totally new to this field and cannot find a way to work properly in it.
Your opinions are needed right now!
The 'dxdiag' information for my laptop is attached.
Thanks in advance!
For Keras to work you need either TensorFlow or Theano. Your laptop seems to have a GeForce 930M GPU. This card has a compute capability of 5.0 according to the NVIDIA documentation (https://developer.nvidia.com/cuda-gpus). So you are better off with your laptop, if my research is right.
I guess you will use CNNs for your video processing, and therefore I would advise you to use a GPU. You can also run your code on a CPU, but training will be much slower, since GPUs are made for parallel computing and CPUs are not (the big matrix multiplications profit a lot from parallel computing).
Maybe you could try a cloud computing provider if you think training is too slow on your laptop.
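If it helps, here is a minimal sketch (assuming Keras and TensorFlow are installed; the TF 1.x device_lib helper is just one way to do this) to confirm which backend Keras is using and whether TensorFlow can see a GPU at all:

```python
# Minimal check: print the active Keras backend and the devices TensorFlow
# can see. On a card with compute capability below 3.0 you should only see
# CPU entries, and Keras will then simply run on the CPU.
import keras
from tensorflow.python.client import device_lib

print("Keras backend:", keras.backend.backend())   # e.g. 'tensorflow' or 'theano'
for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)                # look for a 'GPU' entry
```

If no GPU entry shows up, everything will still work, just more slowly on the CPU.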
I have a Dockerfile I made for a TensorFlow Object Detection API pipeline, which currently uses intelpython3 and conda's tensorflow-gpu. I'm using it to run models like single-shot detectors (SSD) and Faster R-CNN.
I'm curious whether the hassle of replacing the simple conda install of tensorflow-gpu with everything needed to build TensorFlow from source inside the Dockerfile would result in a worthwhile training speed increase. The server I'm currently using has an Intel Xeon E5-2687W v4 @ 3.00 GHz, four Nvidia GTX 1080s, 128 GB RAM, and an SSD.
I don't need benchmarks (unless they exist), but I really have no idea what to expect performance-wise between the two, so any estimates would be greatly appreciated. Also, if someone wouldn't mind explaining which parts of training would actually see optimizations compared with the conda tensorflow-gpu build, that would be really awesome.
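I can't offer hard numbers either, since the gain from a source build depends on which CPU instructions (AVX/FMA/SSE) the prebuilt binary skips and on how much of the pipeline is CPU-bound (input decoding, augmentation) rather than GPU-bound. One rough way to measure it on your own server is to time a fixed amount of training in both images; a sketch, assuming tf.keras is available, with a small stand-in model rather than your actual detection pipeline:

```python
# Rough micro-benchmark: time a few epochs on random data. Run it once in the
# conda tensorflow-gpu image and once in the from-source image and compare.
import time
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 128, 128, 3).astype(np.float32)
y = np.random.randint(0, 10, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

start = time.time()
model.fit(x, y, batch_size=32, epochs=3, verbose=0)
print("seconds per epoch:", (time.time() - start) / 3)
```

Broadly, the parts most likely to speed up from a source build are the CPU-side ops (the ones behind the "your CPU supports instructions that this TensorFlow binary was not compiled to use" warning); the CUDA kernels running on the GTX 1080s are largely the same either way.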
As far as I know, the answer is no: OpenCL is designed for a single multi-core system.
But is there any way to use OpenCL across multiple computers (each of which is a multi-core system)? If not, are any additional tools or frameworks required?
I have read some articles about distributed computing, cluster computing, grid computing, and so on, but I can't find a satisfying answer.
Any ideas will be appreciated
Thank you :)
There are two frameworks for this purpose: VirtualCL and CLara. Both packages let you work with remote machines transparently, as if they were local devices. Unfortunately, VirtualCL is only available as pre-compiled binaries without sources, and CLara is no longer actively developed.
SnuCL uses MPI and OpenCL to transparently use the cluster through the OpenCL API. It also adds a few OpenCL extensions to effectively deal with the memory objects.
It is open source. See http://aces.snu.ac.kr/Center_for_Manycore_Programming/SnuCL.html
and http://tbex.twbbs.org/~tbex/pad/SunCL.pdf
There is one more solution not mentioned above: dOpenCL.
"dOpenCL (distributed OpenCL) is a novel, uniform approach to programming distributed heterogeneous systems with accelerators. It transparently integrates the nodes of a distributed system into a single OpenCL platform. Thus, dOpenCL allows the user to run unmodified existing OpenCL applications in a heterogeneous distributed environment. Besides, it extends the OpenCL programming model to deal with individual nodes of the distributed system."
I have used VirtualCL to form a GPU cluster with three AMD GPUs as compute nodes and my Ubuntu Intel desktop running as the broker node. I was able to start both the broker and the compute nodes.
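Once the broker and compute nodes are up, the remote GPUs should simply appear as additional OpenCL devices to applications on the broker machine. A quick way to verify that (sketched here with pyopencl purely for brevity; the equivalent C calls are clGetPlatformIDs/clGetDeviceIDs):

```python
# List every OpenCL platform and device visible to this process. With
# VirtualCL / SnuCL / dOpenCL configured, the GPUs on the remote compute
# nodes should show up here alongside any local devices.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name, cl.device_type.to_string(device.type))
```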
In addition to the various options already mentioned by other posters, here are two more open source projects that you may be interested in:
ocland (in beta stage): offers a server application and an ICD implementation that the clients can use to take advantage of local and remote devices that support OpenCL in a transparent fashion. The license is GPLv3.
COPRTHR SDK by Brown Deer Technology (currently version 1.6): this SDK, which offers an open-source (GPLv3) OpenCL implementation for x86_64, ARM, Epiphany, and Intel MIC, includes a "Compute Layer Remote Procedure Call" implementation. This consists of a client-side OpenCL implementation that supports RPC (libclrpc) and a server application (clrpcd). The website doesn't mention much about it, but the documentation contains a section about this CLRPC implementation.
I'm working on a diploma project that heavily uses mathematical calculations and should present some results in 3D. For these purposes I decided to use CUDA or OpenCL for the parallel computation of the mathematical part and, most probably, OpenGL for presenting the results. In addition, the project should be deployable on clusters (running MS Windows); for this purpose my project supervisor recommended MPI.
My question is the following: where is it easier to combine all these components, in MS Visual Studio or in Qt?
The main part is CUDA + OpenCL + OpenGL; it will be the core of the project.
P.S. This question is not meant to start a holy war between Qt and MS Visual Studio.
OpenCL is not limited to GPUs; it can be used for parallel programming on clusters as well. Intel, for example, provides an OpenCL implementation aimed at multi-core CPUs and clusters.
So my recommendation is to use OpenCL for both GPU computing and clustering. MPI (Message Passing Interface) is mainly a way to communicate between tasks running on separate cluster nodes; it's not so much a clustering framework by itself.
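To illustrate the division of labour: MPI only moves data between processes running on different nodes, and each process then does its own local computation (that is where your CUDA/OpenCL kernels would run). A toy scatter/compute/gather sketch, shown with mpi4py purely for brevity (the same pattern applies with the C MPI API you would call from Visual Studio or Qt):

```python
# Toy MPI pattern: rank 0 scatters chunks of work, every rank processes its
# chunk locally (stand-in for the GPU part), rank 0 gathers the results.
# Run with e.g.: mpiexec -n 4 python mpi_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

data = np.arange(size * 4, dtype=np.float64) if rank == 0 else None
chunk = np.empty(4, dtype=np.float64)
comm.Scatter(data, chunk, root=0)        # distribute work across nodes

chunk *= 2.0                             # stand-in for local CUDA/OpenCL work

result = np.empty(size * 4, dtype=np.float64) if rank == 0 else None
comm.Gather(chunk, result, root=0)       # collect partial results
if rank == 0:
    print(result)
```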