Curious if anyone out there is doing Android Studio development on a dual Xeon machine.
I would like to know if the additional CPU gave a dramatic or visible (50% or more) boost in build performance.
You've probably found out by now, but for others wondering: chances are, it won't.
I did some testing with two relatively quick E5-2650 v4 Xeons on a largish Java + Kotlin project, and the Xeons were considerably slower than low-core-count, higher-clock CPUs.
Check out the benchmarks here:
https://superuser.com/questions/1115206/will-dual-xeons-improve-android-studio-build-times/
I have tried to measure the speed of Android Studio 3.1.4 on the same hardware: MacBook Pro 2011, 4 GB RAM, 240 GB Samsung SSD, 2.4 GHz Core i5.
I installed 3 different OSes on this machine: Windows 10, macOS High Sierra 10.13, and Ubuntu 18.04.
Average build times (running gradlew clean build and gradlew clean assembleRelease) were around 30% shorter on macOS/Ubuntu than on Windows.
On my other work machine (Core i5-7400 3.0 GHz, 16 GB RAM, 250 GB SSD), the build takes 4.34 min on Windows 10.
The same project on a slightly slower processor, but with the same RAM and SSD, running Ubuntu 16.04, builds roughly twice as fast!
Well, I was shocked by the results, but I still chose Windows as my development machine, because the keyboard and software are much more comfortable and usable for me than on Unix-like systems. And if I had to choose between macOS and Ubuntu, the Mac is much easier to set everything up on, while Ubuntu is too complex for most people. The choice is up to you.
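If you want to reproduce this kind of comparison yourself, a minimal timing script along these lines works on all three OSes. BUILD_CMD is a placeholder here (it defaults to a stand-in command so the sketch runs anywhere); point it at your real ./gradlew invocation:

```shell
#!/bin/sh
# Average the wall-clock time of N runs of a build command.
# BUILD_CMD is a placeholder -- point it at your real build,
# e.g. BUILD_CMD="./gradlew clean assembleRelease".
BUILD_CMD=${BUILD_CMD:-"sleep 1"}   # stand-in command so the sketch runs anywhere
N=${N:-3}
total=0
i=1
while [ "$i" -le "$N" ]; do
    start=$(date +%s)
    $BUILD_CMD
    end=$(date +%s)
    total=$((total + end - start))
    i=$((i + 1))
done
avg=$((total / N))
result="average: ${avg}s over $N runs"
echo "$result"
```

Run each OS's measurement from the same project checkout, and do a warm-up build first so daemon startup and dependency downloads don't skew the numbers.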
Related
I am developing a .NET application, and have the luxury of doing this on a fairly powerful desktop PC. I want to ensure it runs okay on PCs with much lower spec, but I don't have spare machines kicking around and can't really afford to buy them. Is there any way to simulate a lower-spec PC on my current PC, to get a feel for how the software might run?
Any help or advice would be very much appreciated.
*My PC is Windows 7 Ultimate 64-bit with 8-core Intel i7 and 16GB RAM.
You could install VMware and install any OS you want, with any hardware specs you want, provided they don't exceed your actual hardware, of course.
Keep in mind that VMware is just a virtualization layer. It presents virtual hardware to the guest OS, but your code still runs on the same i7, so per-core speed will be roughly the same.
Here is an example of several Visual Studio plug-ins that emulate devices: http://www.asp.net/mobile/device-simulators. You can also install Windows 8 and run Hyper-V; it's great for this kind of thing.
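To make the VM feel like a genuinely lower-spec machine, you can cap its virtual hardware in the VM's .vmx configuration file. A sketch (these key names are from typical Workstation/Player configs; verify against your VMware version):

```
numvcpus = "2"       # expose only two virtual CPUs to the guest
memsize = "2048"     # give the guest 2 GB of RAM
```

Edit the file while the VM is powered off, then boot the guest and confirm the reduced specs in its own Task Manager or System properties.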
I did some compilation (ACE+TAO, and the Boost C++ library) in a VMware Player virtual machine environment, and I found the performance intolerable.
My machine is a T410 (i7 620, 6 GB memory, 5400 RPM hard drive). The host OS is Ubuntu 12.04, on which I installed VMware Player with XP as the guest OS. I allocated 3 GB of RAM and 2 cores to the VM, with a 15 GB virtual disk.
For example, for Boost, bootstrap.bat takes about 3 hours to complete in XP, while it takes only several seconds in Ubuntu. The ACE+TAO compilation takes 2 days in the XP guest but less than 3 hours in Ubuntu.
I checked Task Manager in the XP guest: CPU usage stays around 12%, and only ~300 MB of memory is actually used. Since neither CPU nor memory is the bottleneck, the problem is probably on the disk side.
Since replacing the hard drive is not possible, is there a guide for VMware Player performance tuning, especially for the disk?
Here is a link from the VMware site:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1008885
But the vmware-vdiskmanager tool mentioned in the link does not exist on my machine. How do I check my machine's snapshots? There seem to be no snapshots generated on my machine.
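For reference, these are the disk-related .vmx tweaks I'm planning to try, gathered from community tuning guides (option names not yet verified against my Player version), since they reduce the disk traffic caused by guest-memory backing and trimming:

```
mainMem.useNamedFile = "FALSE"     # don't back guest RAM with a .vmem file on disk
MemTrimRate = "0"                  # stop the host from reclaiming guest memory
sched.mem.pshare.enable = "FALSE"  # disable page-sharing scans
```

These go in the VM's .vmx file while it is powered off.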
I've been experimenting quite a bit with my new MacBook Pro and have run into significant performance issues with VMware Fusion, as well as when running natively from Boot Camp.
My three scenarios are:
1) Native boot from Boot Camp (16 GB, SSD)
2) Native boot into OS X, VMware Fusion running from the Boot Camp partition (8 GB RAM plus 4 of our processor cores for VMware)
3) Native boot into OS X, VMware Fusion running from files on the native OS X partition (SSD) (8 GB RAM plus 4 of our processor cores for VMware)
I don't have enough space to try all these at the same time but I'm suspicious that number three is significantly faster than either 1 or 2.
I've found that in both 1 and 2 (which is what I have loaded on my computer now), doing things like building large projects with Visual Studio 2010 bogs down, whereas on my Lenovo W520 running the same type of SSD, I don't get bogged down. I am surprised native Boot Camp is any slower, but it seems to be.
Any thoughts appreciated.
Native boot has always been faster for me (understandably, with no abstraction layers)
As for the other two, there shouldn't be much difference. Windows is still running through the same abstraction layer, so if the same amount of computing power is available then they should be comparable. If one is on an SSD, that would explain some of the speed difference.
On the one running on top of OSX, does it slow down when you do a lot of processor-intensive tasks under OSX? If so, it's the SSD making the difference there - but it still shouldn't be faster than VMWare.
I've found that 3 is faster than option 2, and option 1 was by far the fastest. Natively booting into Windows with Boot Camp was much quicker and felt more like a normal native Windows laptop. When I run VMware with a clean (non-Boot Camp) image it seems to run OK; however, I've found the best option is as follows.
VMware settings (best option for me, considering my system specs):
2 processors for the Windows VM
4 GB RAM allocated within VMware
My system specs:
2.4 GHz i7 (Oct 2011 model MBP)
8 GB total RAM
I've found that Boot Camp was slightly faster than VMware (with a non-Boot Camp image), but I still use VMware the majority of the time because I like using the host OS for things like mail/chat/browsing.
I'm running Visual Studio 2010 SP1 on a .NET 4.0 WPF project. I use the Telerik control suite, Entity Framework, and Oracle data access components. Our project is still pretty small, but it builds in about 6 seconds after a "clean" (1 solution with 3 projects).
Pretty much as the title says, using Boot Camp.
WDDM 1.1 compliance and GPU recognition were confirmed by the WP7 emulator running with EnableFrameRateCounters showing.
I'm considering a MacBook Air as a compromise: I need access to the iPhone dev tools and want to upgrade my Win7 mobile capability to something reasonably performant, all with one device.
My current laptop barely runs Win7 and borders on unusable with the WP7 tooling, hence the interest in solving two problems with one device, if that's realistic.
I assume that if the device can run the WP7 tools satisfactorily, it would be capable of anything else I might want to do when booted into Win7.
The new MacBook Airs do not have very powerful processors. The 11" maxes out at 1.66 GHz, while the 13" maxes out at 2.13 GHz. However, they do have the same GPU as the current 13" MacBook Pro. Also, since they use solid-state drives, data access is significantly faster. Overall, it will not be the fastest computer you've used, but it should be enough to work with.
I've bought one, but since it's going to the wife, I won't be able to test it in depth.
Instead, the 13" MacBook Pro from '09 works fine (MonoTouch + iOS dev, and Boot Camp for Visual Studio + WP7 dev). I upgraded to 4 GB of memory and that helped; the disk is also slower than I'd like. It responds like a mid-grade desktop, IMO.
The problem I see is that the processor in the Airs is a ULV part with a really slow clock; also, the SSD in the base version is only 64 GB, which is going to be cramped, I think.
Consider this: many Mac gamers install Windows with Boot Camp just to get a better gameplay experience.
That's because Windows has native access to the GPU through Boot Camp.
http://www.mth.kcl.ac.uk/~shaww/web_page/grid/macgpu.htm (2009 article)
http://www.gpgpu.org/forums/viewtopic.php?t=3766&highlight=bootcamp (2007 article)
So the answer is yes.
I am developing an OpenGL application and I am seeing some strange things happen. The machine I am testing with is equipped with an NVidia Quadro FX 4600 and it is running RHEL WS 4.3 x86_64 (kernel 2.6.9-34.ELsmp).
I've stepped through the application with a debugger and I've noticed that it is hanging on OpenGL calls that are receiving information from the OpenGL API: i.e. - glGetError, glIsEnabled, etc. Each time it hangs up, the system is unresponsive for 3-4 seconds.
Another thing that is interesting is that if this same code is run on RHEL 4.5 (Kernel 2.6.9-67.ELsmp), it runs completely fine. The same code also runs perfectly on Windows XP. All machines are using the exact same hardware:
PNY NVidia Quadro FX 4600 768 MB PCI Express
Dual Intel Xeon DP Quad Core E5345 2.33 GHz
4096 MB 667 MHz Fully Buffered DDR2
Super Micro X7DAL-E Intel 5000X Chipset Dual Xeon Motherboard
Enermax Liberty 620 watt Power Supply
I have upgraded to the latest 64-bit drivers (version 177.82, release date Nov 12, 2008), and the result is exactly the same.
Does anyone have any idea what could be causing the system to hang on these OpenGL calls?
It appears that this is an issue with less-than-perfect NVidia drivers for Linux. Upgrading to a newer kernel appears to help. If I am forced to stay on this dated kernel, there are some things I've tried that seem to help.
Setting the __GL_YIELD environment variable to "NOTHING" prior to starting X seems to increase stability with this older kernel.
http://us.download.nvidia.com/XFree86/Linux-x86_64/177.82/README/chapter-11.html
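A minimal sketch of how I set it: the export has to happen in the shell environment that launches X so the driver inherits it (the startx line is left commented out here):

```shell
#!/bin/sh
# Tell the NVIDIA driver to busy-wait instead of yielding the CPU.
# __GL_YIELD must be in the environment of the process that starts X.
__GL_YIELD="NOTHING"
export __GL_YIELD
# startx   # launch X from here so the driver inherits the variable
echo "__GL_YIELD is set to: $__GL_YIELD"
```

If you use a display manager instead of startx, the variable needs to be set in that service's environment instead.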
I've also tried disabling Triple Buffering and Flipping.
I've also found that these forums are very helpful for Linux/NVidia problems; just do a search for "linux crash".
You may be able to dig deeper by using a system profiler like Sysprof or OProfile. Do other OpenGL applications using these calls exhibit similar behavior?