Keras on a GPU with less than 3.0 compute capability? - Windows

I am working on a GPU server from my college with compute capability below 3.0, Windows 7 Professional, 64-bit operating system and 48GB RAM. I tried to install TensorFlow earlier, but then learned that my GPU cannot support it.
I now want to work with Keras, but since TensorFlow is not available, will it work at all? I am also not able to import it.
I have to do video processing and work on big video datasets for Dynamic Sign Language Recognition. Can anyone suggest what I can do to get going in the field of Deep Learning with such a GPU server? Or, if I work on CPU only, will that be a problem for this kind of video processing?
I also have an HP ProBook 440 G4 laptop with Windows 10 Pro, so is it better than the GPU server or not?
I am totally new to this field and cannot find a way to work properly in it.
Your opinions are needed right now!
Thanks in advance!

For Keras to work you need either TensorFlow or Theano. Your laptop seems to have a GeForce 930M GPU. This card has a compute capability of 5.0 according to the NVIDIA documentation (https://developer.nvidia.com/cuda-gpus). So you are better off with your laptop, if my research is right.
I guess you will use CNNs for your video processing, so I would advise you to use a GPU. You can also run your code on a CPU, but training will be much slower, since GPUs are built for parallel computing and CPUs are not (the big matrix multiplications profit a lot from parallelism).
Maybe you could try a cloud computing provider if training turns out to be too slow on your laptop.
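As a quick sanity check, here is a minimal sketch (assuming TensorFlow is installed; device_lib is the TF 1.x-era API) to list the devices TensorFlow can actually see:

    # List the devices TensorFlow detects (TF 1.x-era API).
    # On a GPU with compute capability below 3.0, only the CPU should
    # show up, since prebuilt TensorFlow requires compute capability >= 3.0.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    for device in device_lib.list_local_devices():
        print(device.name, device.device_type)

If only a CPU device is listed, Keras will still run on top of the CPU build of TensorFlow, just much more slowly.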

Related

Expected performance increase building tensorflow-gpu from source

I have a Dockerfile I made for a TensorFlow object detection API pipeline, which currently uses intelpython3 and conda's tensorflow-gpu. I'm using it to run models like single-shot detectors and Faster R-CNN.
I'm curious whether the hassle of replacing the simple conda install tensorflow-gpu with everything needed to build TensorFlow from source in the Dockerfile would result in a worthwhile training speed increase. The server I'm currently using has: Intel Xeon E5-2687W v4 3.00 GHz, (4) Nvidia GTX 1080, 128GB RAM, SSD.
I don't need benchmarks (unless they exist), but I really have no idea what to expect performance-wise between the two, so any estimates would be greatly appreciated. Also, if someone wouldn't mind explaining which parts of training would actually see optimizations versus the conda tensorflow-gpu, that would be really awesome.
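In the absence of published benchmarks, one way to get your own estimate is to time the same op under both builds; a minimal sketch (TF 1.x-style API to match this setup; the matrix size is an arbitrary choice):

    # Hypothetical micro-benchmark: run it once under the conda build and
    # once under the source build, then compare wall-clock times (TF 1.x API).
    import time
    import tensorflow as tf

    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)

    with tf.Session() as sess:
        sess.run(c)  # warm-up (CUDA context creation, autotuning)
        start = time.time()
        for _ in range(50):
            sess.run(c)
        print("avg step: %.4f s" % ((time.time() - start) / 50))

Running the identical script inside both images gives a like-for-like comparison for your hardware.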

How can I use TensorFlow on Windows with AMD GPU?

I want to use TensorFlow on Windows (Win 10) with an AMD GPU.
If I google it, there are a lot of discussions and sources, but I just couldn't figure out the best way to do this at the moment.
Could someone write short installation instructions that they think are the best and most up-to-date way of doing so?
TensorFlow officially only supports CUDA, which is a proprietary NVIDIA technology. There is one unofficial implementation using OpenCL here which could work, or you could try using Google Colab.
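If you go the Colab route, a one-line sketch to confirm that the notebook runtime actually has an NVIDIA GPU attached:

    # Quick check inside a Colab notebook that a GPU runtime is active.
    import tensorflow as tf

    print(tf.test.gpu_device_name() or "No GPU found")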

Rough performance estimation for porting MATLAB code to C++

I am planning to port a MATLAB algorithm to mobile devices. As part of my proposal I need to give a rough estimate of the computation time required for the algorithm on mobile devices. I have timing figures for the algorithm in MATLAB. It would be great if someone could share their experience of how to get a rough number for a given mobile platform.
Update:
I am currently targeting the proposal at the following mobile specification:
- Operating system: Android 5.1.1 Lollipop
- Display: 8-inch 1920x1200 LCD
- Processor: Tegra K1 quad-core at 2.2 GHz
- GPU: 192-core Kepler
- Storage: 16GB, MicroSD card expandable

Raspberry Pi Embedded application

I am developing a computer vision system to control the orientation of two mirrors to track stimuli in the field of view. We are sending coordinates to a motor over the network and trying to track as smoothly as possible.
I have two questions regarding this:
1. Is Python suitable for this kind of project? I have already coded it in Python and find it very easy to use.
2. I am running Raspbian on the Raspberry Pi but found that it's not a real-time OS. We are sending a command every 20 ms to the server built on the Raspberry Pi. Should I switch to an Arduino or patch the Linux kernel for this application?
Python, combined with OpenCV, is one of the best candidates for this task.
As mentioned in the comment above, the "real-time" issue is OS-related. I personally recommend an Arduino-based solution, even though that puts more burden on the hardware design. You could also check the new IoT solutions from Intel; they have a wide range of boards.
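For illustration, a minimal sketch of the 20 ms command loop from the question (the UDP transport, the address, and the coordinate stub are assumptions, not from the original post):

    # Hypothetical 20 ms command loop sending mirror coordinates to the Pi.
    import socket
    import time

    PI_SERVER = ("192.168.1.50", 5005)  # assumed address/port of the Pi

    def get_target_coordinates():
        return 0.0, 0.0  # stub for the actual vision/tracking step

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_tick = time.monotonic()
    while True:
        x, y = get_target_coordinates()
        sock.sendto(("%.3f,%.3f" % (x, y)).encode(), PI_SERVER)
        next_tick += 0.020  # 20 ms period, as in the question
        time.sleep(max(0.0, next_tick - time.monotonic()))

Note that time.sleep() on stock Raspbian can still jitter by a few milliseconds, which is exactly the non-real-time behaviour the question is asking about.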

Windows CE OpenCV performance

I'm using OpenCV on Windows CE 6.0 R2 and the performance is quite weak. I can do 300 YUV-to-RGB conversions per second (using my own code), but OpenCV takes 3 seconds to perform a single cvGoodFeaturesToTrack() on a VGA image. I know OpenCV uses the STL a lot; does anyone have experience with the STL on Windows CE?
Thanks,
Filip
cvGoodFeaturesToTrack() is a heavy function that uses a lot of floating-point operations. If your platform does not have hardware floating-point support, that would explain what you are seeing.
Try using FAST features which are, well, FAST.
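A minimal sketch of the suggested swap (shown with the OpenCV Python API for brevity, although the question itself uses the old C API; "frame.png" is a placeholder path):

    # Detect FAST corners instead of calling cvGoodFeaturesToTrack().
    import cv2

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    fast = cv2.FastFeatureDetector_create(threshold=20)  # segment-test threshold
    keypoints = fast.detect(img, None)
    print(len(keypoints), "FAST corners")

FAST classifies corners with integer pixel comparisons on a small circle around each candidate, so it avoids the floating-point eigenvalue computation that makes cvGoodFeaturesToTrack() so expensive without hardware FP.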
