I am planning to port a MATLAB algorithm to mobile devices. As part of my proposal I need to give a rough estimate of the computation time the algorithm will require on a mobile device. I have timing figures for the algorithm in MATLAB. It would be great if someone could share their experience of how to derive a rough number for a given mobile platform.
Update:
I am currently targeting the proposal at the following mobile specification:
Operating system: Android 5.1.1 Lollipop
Display: 8-inch 1920x1200 LCD
Processor: Tegra K1 quad-core at 2.2 GHz with a 192-core Kepler GPU
Storage: 16GB, MicroSD card expandable
I am working on a GPU server from my college with CUDA compute capability below 3.0, Windows 7 Professional 64-bit, and 48GB RAM. I tried to install TensorFlow earlier, but then learned that my GPU cannot support it.
I now want to work with Keras, but since TensorFlow is not available, will Keras work or not? I am also not able to import it.
I have to do video processing and work on big video datasets for Dynamic Sign Language Recognition. Can anyone suggest what I can do to get going in the field of Deep Learning with such a GPU server? Or, if I want to work on a CPU only, will that be a problem for video processing?
I also have an HP ProBook 440 G4 laptop with Windows 10 Pro, so is it better than the GPU server I have or not?
I am totally new to this field and cannot find a way to get started properly.
Your opinions are needed right now!
The 'dxdiag' information for my laptop is attached.
Thanks in advance!
For Keras to work you need either TensorFlow or Theano as a backend. Your laptop seems to have a GeForce 930M GPU. This card has a compute capability of 5.0 according to the NVIDIA documentation (https://developer.nvidia.com/cuda-gpus), so you are better off with your laptop, if my research is right.
I guess you will use CNNs for your video processing, and therefore I would advise you to use a GPU. You can also run your code on a CPU, but training will be much slower, since GPUs are built for parallel computing and CPUs are not (the big matrix multiplications profit a lot from parallel execution).
Maybe you could try a cloud computing provider if training turns out to be too slow on your laptop.
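Since TensorFlow is not an option on the server, here is a minimal sketch of switching Keras to the Theano backend via an environment variable. It assumes the multi-backend Keras (1.x/2.x) and Theano are both installed (pip install keras theano):

    # Select the Theano backend; this must happen before keras is imported.
    import os
    os.environ["KERAS_BACKEND"] = "theano"

    from keras import backend as K
    print(K.backend())  # should print "theano"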
I'm trying to do some profiling on my OpenGL ES code. Something in my GPU pipeline (a shader, I believe) is causing a huge delay. Which is the best profiler I can use? Is this one a good option? Is there one I can use directly within Visual Studio?
If you have a GPU performance issue on iOS, the best option is to use Xcode's tools to profile directly on the device: run the app from Xcode, then do a frame capture to look at the timings for each draw call and the number of cycles used by each shader (more info here).
You can also profile on Windows if you are able to reproduce your graphics pipeline in classic OpenGL on your Windows version, but this may not be conclusive: the iPhone's GPU is very different from a classic desktop GPU, so the bottleneck might not be the same on Windows as on iOS.
To profile on Windows I would suggest either NVIDIA PerfKit (if you have an NVIDIA card) or AMD's GPU PerfStudio (if you have an AMD card).
There is also RenderDoc, which is a nice tool, but I am not sure it provides much profiling information (it is more for debugging graphics issues than for profiling).
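A first approximation of what those tools report per draw call - GPU elapsed time - can also be measured by hand on the desktop side with an OpenGL timer query. A minimal sketch in Python, assuming the third-party glfw and PyOpenGL packages (pip install glfw PyOpenGL); the framebuffer clear here is just a stand-in for whatever draw call you want to time:

    # Time a piece of GPU work with a GL_TIME_ELAPSED query (desktop GL 3.3+).
    import glfw
    from OpenGL.GL import *

    glfw.init()
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window, we only need a context
    win = glfw.create_window(64, 64, "timer", None, None)
    glfw.make_context_current(win)

    query = int(glGenQueries(1))                 # one query object
    glBeginQuery(GL_TIME_ELAPSED, query)
    glClearColor(0.0, 0.0, 0.0, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)                 # stand-in for the profiled draw call
    glEndQuery(GL_TIME_ELAPSED)

    elapsed_ns = int(glGetQueryObjectuiv(query, GL_QUERY_RESULT))  # waits for the result
    print("GPU time: %.3f ms" % (elapsed_ns / 1e6))
    glfw.terminate()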
I want to try GPU programming. The GPU in my MacBook Air is an Intel HD Graphics 3000, so I think I cannot use CUDA.
I did some research and ran into OpenCL. But on the homepage it says:
OpenCL™ (Open Computing Language) is a low-level API for heterogeneous computing that runs on CUDA-powered GPUs.
I think my GPU is not CUDA-powered, so maybe OpenCL cannot be used either.
So I wonder: how can I do GPU programming on my MacBook Air?
By GPU programming I assume you mean high-performance parallel computation on the GPU without graphics (since you mention OpenCL and CUDA).
If so this link may be helpful: https://anteru.net/2012/11/03/2009/.
And that card definitely can't use CUDA.
CUDA is supported by NVIDIA only, because it is NVIDIA's invention; the quote above is NVIDIA describing its own OpenCL support, not a requirement of OpenCL itself. OpenCL is an open standard that emerged as the industry's answer to NVIDIA's CUDA, and it is widely supported across platforms, including NVIDIA, AMD, Intel, and Apple.
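A quick way to see what OpenCL actually exposes on a given machine is to enumerate its platforms and devices. A minimal sketch using the third-party PyOpenCL package (pip install pyopencl); whether the HD Graphics 3000 shows up as a GPU device depends on Apple's driver, so you may only see a CPU device:

    # List every OpenCL platform and device the system exposes.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print("Platform:", platform.name)
        for device in platform.get_devices():
            print("  Device:", device.name,
                  "| type:", cl.device_type.to_string(device.type))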
All 2010 MacBook Pros come with two graphics processors - a low-performance integrated Intel HD one and a high-performance discrete NVIDIA one - and the system switches between them on the fly depending on the needs of the running applications.
I have a simple Cocoa application that consists of just a menu bar item containing an NSTextField. All I do is update the text field with an NSAttributedString from time to time. The trouble is that my application switches my MacBook Pro to the high-performance NVIDIA card (I used the gfxCardStatus tool to confirm this).
What could possibly need the high-performance card? Is there a known list of reasons for an application to require the high-performance graphics card? And is there a way to force the computer to stay on the integrated graphics card?
There is a good article about GPU switching in the newer MacBook Pros at Ars Technica.
I noticed that OS X switches to the dedicated GPU if you
Start an application that links against OpenGL
Connect a second display
The code of gfxCardStatus is open source, and the relevant part seems to be located in switcher.m. You can take a closer look there.
In Mac OS X 10.7 you can specify a setting in the app's Info.plist to stop it from switching to the discrete graphics:
https://developer.apple.com/library/mac/qa/qa1734/_index.html
This needs to be a 2011-or-later MacBook Pro.
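The QA1734 note boils down to one Boolean key in the app's Info.plist; a minimal sketch of the fragment (key name as documented by Apple):

    <!-- Opt in to automatic graphics switching so the app does not
         force the machine onto the discrete GPU. -->
    <key>NSSupportsAutomaticGraphicsSwitching</key>
    <true/>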
I am interested in how to read the GPU temperature (the temperature of the graphics processing unit, the main chip of the graphics card) using some video card driver API.
Everyone knows that there are two popular chip manufacturers - ATI and NVIDIA - so there are two different kinds of drivers to read the temperature from. I'm interested in learning how to do it for each vendor's driver.
The language in question is irrelevant - it could be C/C++, the .NET platform, or Java - but let's say .NET is preferred.
Has anyone done this before?
For NVIDIA you would use nvcpl.dll.
Here's the documentation:
http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/common/src/NvCpl/docs/NVControlPanel_API.pdf
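A minimal sketch of calling one of the DLL's documented C exports - shown in Python via ctypes, though the same export can just as well be P/Invoked from .NET. NvCplGetDataInt is listed in the PDF above; the query flag below is a placeholder, so look up the real constants (and whether a temperature query exists) in that documentation:

    # Load NVIDIA's control-panel DLL and call its integer-query export.
    # QUERY_FLAG is a PLACEHOLDER - see NVControlPanel_API.pdf for real values.
    import ctypes

    nvcpl = ctypes.WinDLL("nvcpl.dll")   # ships with the NVIDIA driver on Windows

    QUERY_FLAG = 0                       # placeholder index, not a real constant
    value = ctypes.c_long(0)
    if nvcpl.NvCplGetDataInt(ctypes.c_long(QUERY_FLAG), ctypes.byref(value)):
        print("Query result:", value.value)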
I found this: AMD Display Library SDK (ADL for short). That covers ATI cards.
http://developer.amd.com/display-library-adl-sdk/
Link to the original page, via Wayback Machine:
http://web.archive.org/web/20101103020811/http://developer.amd.com/gpu/adlsdk/Pages/default.aspx
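The ADL SDK exposes plain C entry points, so it can be driven from .NET via P/Invoke or, as sketched here, from Python via ctypes. This sketch assumes 64-bit Windows with the AMD driver installed (atiadlxx.dll) and uses the documented Overdrive5 call; ADL reports the temperature in millidegrees Celsius, and error handling is kept minimal:

    # Read the GPU core temperature through AMD's Display Library (ADL).
    import ctypes

    class ADLTemperature(ctypes.Structure):
        # From the ADL SDK headers; iTemperature is in millidegrees Celsius.
        _fields_ = [("iSize", ctypes.c_int), ("iTemperature", ctypes.c_int)]

    adl = ctypes.CDLL("atiadlxx.dll")    # 64-bit driver DLL (atiadlxy.dll on 32-bit)

    # ADL asks the caller for a malloc callback it can use for its own buffers.
    libc = ctypes.cdll.msvcrt
    libc.malloc.restype = ctypes.c_void_p
    MALLOC_CB = ctypes.CFUNCTYPE(ctypes.c_void_p, ctypes.c_int)
    malloc_cb = MALLOC_CB(lambda size: libc.malloc(size))

    adl.ADL_Main_Control_Create(malloc_cb, 1)   # 1 = connected adapters only

    temp = ADLTemperature()
    temp.iSize = ctypes.sizeof(ADLTemperature)
    # Adapter 0, thermal controller 0; real code should enumerate adapters.
    if adl.ADL_Overdrive5_Temperature_Get(0, 0, ctypes.byref(temp)) == 0:  # 0 == ADL_OK
        print("GPU temperature: %.1f C" % (temp.iTemperature / 1000.0))

    adl.ADL_Main_Control_Destroy()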