I am working on a GPU server from my college with a GPU of compute capability less than 3.0, Windows 7 Professional (64-bit), and 48 GB of RAM. I tried to install TensorFlow earlier, but then found out that my GPU cannot support it.
I now want to work with Keras, but since TensorFlow is not there (I am not able to import it), will Keras work or not?
I have to do video processing and work on big video datasets for Dynamic Sign Language Recognition. Can anyone suggest what I can do to get going in the field of deep learning with such a GPU server? Or, if I want to work on the CPU only, will that be a problem in this field of video processing?
I also have an HP ProBook 440 G4 laptop with Windows 10 Pro, so is it better than the GPU server I have or not?
I am totally new to this field and cannot find a way to work properly in it.
Your opinions are needed right now!
The 'dxdiag' information for my laptop is attached.
Thanks in advance!
For Keras to work you need either TensorFlow or Theano as a backend. Your laptop seems to have a GeForce 930M GPU, which has a compute capability of 5.0 according to the NVIDIA documentation (https://developer.nvidia.com/cuda-gpus). So if my research is right, you are better off with your laptop.
I guess you will use CNNs for your video processing, so I would advise using a GPU. You can also run your code on a CPU, but training will be much slower, since GPUs are built for parallel computing and CPUs are not (the big matrix multiplications profit a lot from parallelism).
Maybe you could try a cloud computing provider if training turns out to be too slow on your laptop.
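Before committing to either machine, it is worth checking which backend Keras actually picked up and whether TensorFlow can see a usable GPU. Below is a minimal sketch, assuming Keras with a TensorFlow 1.x backend is installed (on the old server, where TensorFlow cannot even be imported, this is really only a check for the laptop):

    # Minimal check: which backend Keras uses and whether TF sees a GPU.
    # Assumes Keras and TensorFlow 1.x are installed.
    from keras import backend as K
    import tensorflow as tf

    # Keras reports the backend it was configured with ('tensorflow' or 'theano').
    print("Keras backend:", K.backend())

    # An empty string means no GPU is visible and everything runs on the CPU.
    gpu = tf.test.gpu_device_name()
    print("GPU device:", gpu if gpu else "none found - falling back to CPU")

If a GPU shows up here, Keras will use it automatically for training; if not, everything (including CNN training on video data) falls back to the much slower CPU path.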
I have a Dockerfile I made for a TensorFlow Object Detection API pipeline, which currently uses intelpython3 and conda's tensorflow-gpu. I'm using it to run models like SSD and Faster R-CNN.
I'm curious whether the hassle of replacing the simple conda install of tensorflow-gpu with everything needed to build TensorFlow from source inside the Dockerfile would result in a worthwhile training speed increase. The server I'm currently using has an Intel Xeon E5-2687W v4 @ 3.00 GHz, four Nvidia GTX 1080s, 128 GB of RAM, and an SSD.
I don't need benchmarks (unless they exist), but I really have no idea what to expect performance-wise between the two, so any estimates would be greatly appreciated. Also, if someone wouldn't mind explaining which parts of training would actually see optimizations versus the conda tensorflow-gpu build, that would be really awesome.
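For what it's worth, one way to get a concrete number for your own setup is to run the same tiny benchmark inside both images (the conda-based one and a source-built one) and compare the timings. This is only a rough sketch assuming TensorFlow 1.x; the matrix size and iteration count are arbitrary, and a binary built without CPU instruction-set extensions will usually announce that in a warning when it is imported:

    # compare_tf_builds.py - rough benchmark sketch, assuming TensorFlow 1.x.
    # Run this same script in the conda-based image and in the source-built
    # image, then compare the printed timings.
    import time
    import tensorflow as tf

    print("TensorFlow version:", tf.VERSION)
    print("Built with CUDA:", tf.test.is_built_with_cuda())

    def time_matmul(device, n=4096, iters=20):
        # Time a large matmul (plus input generation) pinned to one device.
        with tf.Graph().as_default():
            with tf.device(device):
                a = tf.random_normal([n, n])
                b = tf.random_normal([n, n])
                c = tf.matmul(a, b)
            with tf.Session() as sess:
                sess.run(c)  # warm-up run (graph setup, kernel launch)
                start = time.time()
                for _ in range(iters):
                    sess.run(c)
                return (time.time() - start) / iters

    print("CPU matmul: %.3f s/iter" % time_matmul("/cpu:0"))
    print("GPU matmul: %.3f s/iter" % time_matmul("/gpu:0"))

If the two images give nearly identical GPU timings but different CPU timings, that would suggest a from-source build mostly helps CPU-side work (input pipeline, ops without GPU kernels) rather than the GPU kernels themselves - but that is only a guess until you measure it on your actual training job.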
I upgraded my machine from WinXP to Win7, and at the same time installed Lattice Diamond 3.1. My more complex simulations now hang; Active-HDL uses 100% CPU time and is obviously stuck in an infinite loop. Stupidly, I don't have the installer for Lattice Diamond 2.1 or 2.2, and unbelievably Lattice only lets you download the latest version. No fallbacks!
Does anyone have an installation file for Lattice Diamond 2.1, or at a pinch 2.2? I can provide an FTP site to put it on if someone has one. I know it's a big file, probably 1 GB+.
Actually, I was able to just copy the Active-HDL 9.2 directory from Win7 running in a VirtualBox VM on another machine and overwrite the Active-HDL 9.4 directory. I still wouldn't mind an old installation file, but at least I can simulate now. And in Diamond 3.1 it's actually possible to eliminate bkm warnings and errors. There were too many bugs in 2.1; tech support actually admitted my warnings were Diamond bugs, not flaws in my code.
Actually, I found you can download older versions from Support -> Software Archive.
Actual link:
http://www.latticesemi.com/en/Support/SoftwareArchive.aspx
I am planning to build a new, very fast developer machine for Visual Studio 2012 on Windows 7 64-bit. I am getting all fast components, like SSDs and 16 GB of RAM. I was wondering whether Visual Studio 2012 is built to utilize all available CPU cores. I am trying to decide between an expensive 6-core i7 CPU and a less expensive quad-core CPU, in terms of whether they make a difference in compile time, since that is what takes the most time when I am not coding.
Note: there's a similar post from 2009, but I wanted to know whether VS2012 is much better than VS2010 in terms of core utilization.
I am balking at the $1000+ price of the i7 Extreme.
I would recommend getting a non-Extreme latest-generation i7 with a decent SSD and double the RAM. If you trace what Visual Studio is actually doing at the file-system level, you will see that it reads and writes a great number of files. Many of these files are cached on the second build in a row, but an SSD and enough RAM seem to be the most important speed-up components in the equation; a quad-core i5 or i7 is sufficient, preferably with Hyper-Threading and VT support in case you want to run 64-bit virtual machines later.
I have also noticed decent compilation performance gains from switching an old computer from IDE to AHCI in the BIOS, following the proper guide.
Visual Studio has an option for the maximum number of parallel project builds - it will use as many CPU cores as you allow.
However, you shouldn't focus too much on this option when deciding which CPU to buy. The newest Intel processors use Turbo Boost to speed up workloads that use fewer than the maximum number of cores. Also, the Extreme series is very expensive without much of a performance gain; the lower models can usually be overclocked to match the more expensive ones.
If you really feel that your compilation times are too high, you should take a look at IncrediBuild - I've used it myself and must say that it really speeds up the build process. It understands your system's resources and uses everything available, and it can use remote build servers as well.
If you are building large C++ projects you need as many cores as you can get!
For example, building Qt 4.8.3 took less than two hours on my eight-core machine with an SSD; on a 2-core machine with an HDD it took more than 20 hours.
With BlueGo you can measure how long it takes to build Qt or Boost on your system, so you can use it as a benchmark to find out how well suited your system is for building large C++ projects.
Why does memory matter so much when VS2012 is only 32-bit? Maybe if you have 10 different projects open; otherwise 8 GB should be more than enough, right? Maybe RAM speed is more important, but I have also read that anything over 1600 MHz is a waste. I'm guessing the best upgrade is an SSD, and a PCI Express SSD like the ones OCZ makes would be the best of all.
I am using Xcode 4.2 on a relatively large project (a few tens of thousands of lines of code) and it is horribly slow. Editing is OK, but whenever I try to compile the project (in Xcode, or with xcodebuild on the command line), my machine (quad-core i7 MacBook Pro, 4 GB RAM) grinds to a halt. I have noticed that directly after starting xcodebuild, it spawns more than 8 clang processes before the "real" compile processes start, and no xcodebuild output appears on stdout. I've tried reducing the number of parallel build processes, but lots of clang processes are still launched at the beginning. The project uses 6 or 7 directly dependent external projects and has maybe 120 source files. Under Xcode 3.2 the project used to compile very quickly. What's happening, and how can I make Xcode fast again?
Most of us have three primary options:
Revert to Xcode 3 for daily development.
Throw more hardware at it.
Change your projects' structures and apply large scale development tricks (even though 20-30 KSLOC is not large).
The easiest solution is to revert to Xc3. Yes, Xc4 requires a lot more than Xc3: memory, CPU, disk space, and I/O. You will have to determine where your biggest problems are in order to reduce how much it affects you.
I recently bought a new MBP with twice the physical cores and twice the physical memory, upgraded to Lion, and upgraded to Xc4 at the same time. Compilation times did improve, but much of the rest is actually slower and far more resource-hungry. That's not at all what one would expect from an IDE which also disallows multiple open projects and uses a unified workspace view.
The move to Lion + Xc4 more than doubled my hardware demands in all of the following categories:
Memory
4 GB is now too little for most nontrivial projects using Xc4 and Lion, but you can still reduce the pressure. I have 8 GB and 10 GB on my main two machines, and Xc4 consumes it all quite easily (though my projects are more complex than yours, unless you write reeeally long lines). Anyway, you can reduce this problem by:
Buying more memory.
Disabling indexing if you are building huge projects in Xcode. This can halve Xcode's memory consumption.
Running Xcode in 32-bit mode. This is not an option for everyone, because it will exceed the 4 GB limit in larger projects.
Reducing the number of build processes (again).
Restarting Xcode often (it doesn't do a very good job of cleaning up after itself).
Using Clang as your compiler. Clang instances generally use less memory than Apple's GCC 4.2.
Offloading dependent targets which do not change often. Example: in most cases you will not need to rebuild third-party libraries daily.
CPU
Xcode 4 uses a new (more accurate) completion parser.
Pare down your include dependency graphs and your dependencies. This is quite easy to do with Obj-C, since every Obj-C instance is a pointer. Example: remove loosely dependent framework includes from your headers.
Separate your dependencies and modules properly. Develop libraries, but try to keep them fairly small and be aware of the dependencies they add. Example: I led a project where a dev added a small feature (less than 1% of the app), but due to the number of dependencies it required (e.g. Three20 and then a few more), the binary size of the final executable doubled (and the build times went up, and the parser had a lot more work to do). Most of this was not needed for the feature - it just could not be stripped because the symbols were Obj-C symbols.
Reduce the number of translation units, if possible.
Optimize how you use prefix headers. You can share them, and you can create them for no good reason. This benefits the compiler more than the IDE.
Minimize memory usage. GC work (including all NSObject allocations) consumes over 1/3 of the CPU usage in larger projects, and it spends a lot of time collecting when its heap is huge.
Use external text editors and VC clients. Pretty obvious because Xc4 displays the SBBOD rather often in large projects.
Minimize language complexity. In order of increasing complexity: C, Obj-C, C++, Obj-C++. The more complex the translations, the longer it takes to parse and compile your sources, especially when your dependencies are high. If you can easily set up language barriers in your dependencies, do so.
You can disable code sense indexing via defaults (also reduces memory demands).
Hard Disk
This can be a speed/size balance.
Buy a faster one (e.g. SSD).
Cleanup and minimize your header dependencies.
Use a RAM Disk, such as Make RAM Disk.
Buy more memory. With the amount Xc4 consumes, it ends up swapping out to disk often in large projects.
Optimize your builds to use pch files appropriately. This is not always the obvious direction: I have not used them for several years in large projects.
Clear out the temp files Xcode and Instruments leave behind; they can be huge. In some cases, you can save them to customized locations. If they consume tens of GB and your build dir is on the same drive as your boot dir, cleaning them up regularly will make your disk work a lot less.
Builds
In Xc4, running xcodebuild alongside the Xcode IDE now doubles the work (they use separate build dirs by default); in Xc3, the default was to build to the same destination. Verify your settings - Xcode will do a ton of redundant building if you allow it (e.g. one target per workspace/config, rather than Xc3's flat build model).
Or just fill a drawer with quad core MacMinis and use that as a dedicated or distributed builder.
File Bugs
(self explanatory)
One more possible solution that might help speed up Xcode 4 in some cases: in my case, the main problem seems to have been that four files from my build/ folder had accidentally been checked into my git repository. During compilation, Xcode notices that the build folder has changed and triggers git. Since the build folder contains thousands of files in my case, performance went down. Removing the build/ folder completely from git (it shouldn't have been checked in anyway) reduced the compilation times and system load massively. Performance is still slower than with Xcode 3, but much better than before.
You can switch on Distributed Building in Xcode Preferences and find some friendly person who will help you build your app by forming a cluster of compilation machines with you.
The funny thing is that even when it is off, the compiler still uses a different algorithm/mechanism and builds your app at blazing speed compared to the problems before ;)
So it seems that Apple forgot about lone programmers who don't work in teams, and the single-machine compilation scenario was poorly tested in versions 4.0 - 4.2.
Quick note regarding the 'throw more hardware at it' approach:
SUMMARY: I experienced a SMALL speed increase from making a SIGNIFICANT hardware upgrade
Test: Build/Run the exact same project on cloned macbooks (where the only difference should be their hardware)
Old MacBook Air (1.86 GHz Core 2 Duo, ONLY 2 GB RAM)
vs
Brand new MacBook Pro (2.3 GHz Core i7, 8 GB RAM)
BUILDING ON IPHONE 3GS
Macbook Air 1:00 - 1:15
Macbook Pro ~1:00
=> 0 to 0:15 of speed increase
BUILDING ON IPHONE 4S
Macbook Pro ~0:35
Macbook Air ~0:50
=> ~15 seconds of speed increase
**Partially tested: there DOES appear to be a significant difference in SIMULATOR build times between the 2 machines.
Another culprit for slowness is plugins. The Subversion plugin was absolutely killing my Xcode performance. I followed the instructions in this SO post to disable it. WHEW!