I am using Xcode 4.2 on a relatively large project (a few tens of thousands of lines of code) and it is horribly slow. Editing is OK, but whenever I try to compile the project (in Xcode, or with xcodebuild on the command line), my machine (quad-core i7 MacBook Pro, 4 GB RAM) crawls to a halt. I have noticed that directly after starting xcodebuild, it spawns more than 8 clang processes before the "real" compile processes start, and no xcodebuild output has appeared on stdout by that point. I've tried reducing the number of parallel build processes, but lots of clang processes are still launched at the beginning. The project depends directly on 6 or 7 external projects and has maybe 120 source files. Under Xcode 3.2 the project used to compile very quickly. What's happening? And how can I make Xcode fast again?
Most of us have three primary options:
Revert to Xcode 3 for daily development.
Throw more hardware at it.
Change your projects' structures and apply large-scale development tricks (even though 20-30 KSLOC is not large).
The easiest solution is to revert to Xc3. Yes, Xc4 demands a lot more than Xc3: memory, CPU, disk space, and I/O. You will have to determine where your biggest bottlenecks are to reduce how much they affect you.
I recently bought a new MBP with twice the physical cores and twice the physical memory, upgraded to Lion, and upgraded to Xc4 at the same time. Compilation times did improve, but much of the rest is actually slower and far more resource-hungry. That's not at all what one would expect from an IDE that also disallows multiple open projects and uses a unified workspace view.
The move to Lion + Xc4 more than doubled my hardware demands in all of the following categories:
Memory
4 GB is now too little for most nontrivial projects using Xc4 and Lion. I have 8 GB and 10 GB on my two main machines, and Xc4 consumes it all quite easily (though my projects are more complex than yours, unless you write really long lines). You can reduce this problem by:
Buying more memory.
Disabling indexing if you are building huge projects in Xcode. This can halve Xcode's memory consumption.
Running Xcode in 32-bit mode. This is not an option for everyone, because larger projects will exceed the 4 GB limit.
Reducing the number of parallel build processes (again).
Restarting Xcode often (it doesn't do a very good job of cleaning up after itself).
Using clang as your compiler. Clang instances generally use less memory than Apple's GCC 4.2.
Offloading dependent targets that do not change often. Example: in most cases you will not need to rebuild third-party libraries daily.
CPU
Xcode 4 uses a new (more accurate) completion parser.
Pare down your include dependency graphs. This is quite easy to do with Obj-C, since every Obj-C instance is a pointer. Example: remove loosely dependent framework includes from your headers and forward-declare instead (see the sketch after this list).
Separate your dependencies and modules properly. Develop libraries, but try to keep them fairly small and be aware of the dependencies they will add. Example: I led a project where a dev added a small feature (less than 1% of the app), but due to the number of dependencies it required (e.g. Three20 and then a few more), the binary size of the final executable doubled (and the build times went up, and the parser had a lot more work to do). Most of this was not needed for the feature - it just could not be stripped because the symbols were Obj-C symbols.
Reduce the number of translation units, if possible.
Optimize how you use prefix headers. You can share them, and they are often created for no good reason. This benefits the compiler more than the IDE.
Minimize memory usage. GC work (including all NSObject allocations) consumes over a third of the CPU in larger projects, and the collector spends a lot of time collecting when its heap is huge.
Use external text editors and version-control clients. Pretty obvious, because Xc4 displays the SBBOD (spinning beach ball) rather often in large projects.
Minimize language complexity. In order of complexity: C, ObjC, C++, ObjC++. The more complex the translation units, the longer they take to parse and compile, especially when your dependency counts are high. If you can easily set up language barriers in your dependencies, do so.
You can disable Code Sense indexing via defaults (this also reduces memory demands).
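To make the include-graph advice above concrete, here is a minimal C++ sketch of the forward-declaration technique; the Widget/Renderer names are made up for illustration, and the same idea applies to Obj-C via @class forward declarations:

    // widget.h -- before: including renderer.h here forces every file that
    // includes widget.h to parse renderer.h too, so editing renderer.h
    // triggers a rebuild of all of them.
    //
    //     #include "renderer.h"
    //     class Widget { Renderer *renderer; };
    //
    // widget.h -- after: a forward declaration is enough for a pointer
    // member; only the few .cpp files that actually call into Renderer
    // need to include renderer.h.
    #pragma once

    class Renderer;                      // forward declaration, no #include

    class Widget {
    public:
        void attach(Renderer *r);        // declared here, defined in widget.cpp
    private:
        Renderer *renderer;              // a pointer doesn't need the full type
    };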
Hard Disk
This is a speed/size trade-off.
Buy a faster one (e.g. SSD).
Cleanup and minimize your header dependencies.
Use a RAM Disk, such as Make RAM Disk.
Buy more memory. With the amount Xc4 consumes, it ends up swapping out to disk often in large projects.
Optimize your builds to use pch files appropriately (see the sketch after this list). This is not always the obvious direction: I have not used them for several years in large projects.
Clear out the temp files Xcode and Instruments leave behind; they can be huge. In some cases, you can save them to customized locations. If they do consume tens of GB and your build directory is on the same drive as your boot directory, then cleaning them up regularly will make your disk work a lot less.
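For reference, a lean shared prefix header might look like the following C++ sketch (the file name and contents are hypothetical; the point is to list only headers that nearly every translation unit needs and that rarely change):

    // SharedPrefix.pch -- hypothetical shared prefix header.
    // Everything listed here is effectively included in every translation
    // unit, so keep it small and limit it to stable, widely used headers.
    #ifdef __cplusplus
        #include <string>
        #include <vector>
        #include <map>
    #endif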
Builds
In Xc4, running xcodebuild in parallel with Xcode now doubles the work (they use separate build directories by default); in Xc3, the default was to build to the same destination. Verify your settings - Xcode will do a ton of redundant building if you allow it (e.g. one target per workspace/config, rather than Xc3's flat build model).
Or just fill a drawer with quad-core Mac Minis and use them as a dedicated or distributed builder.
File Bugs
(self-explanatory)
One more possible solution that in some cases might help speed up Xcode 4: in my case, the main problem seems to have been that four files from my build/ folder had accidentally been checked into my git repository. During compilation, Xcode notices that the build folder has changed and triggers git. Since the build folder contains thousands of files in my case, performance went way down. Removing the build/ folder completely from git (it shouldn't have been checked in anyway) reduced the compilation times and system load massively. Performance is still slower than with Xcode 3, but much better than before.
You can switch on distributed builds in Xcode Preferences and find some friendly person who will help you build your app by forming a compilation cluster with you.
The funny thing is that even when the other machine is off, the compiler still uses a different algorithm/mechanism and builds your app at blazing speed compared to the problems before ;)
So it seems that Apple has forgotten about lone programmers who don't work in teams, and the single-machine compilation scenario was poorly tested in versions 4.0 - 4.2.
Quick note regarding the 'throw more hardware at it' approach:
SUMMARY: I experienced a SMALL speed increase from making a SIGNIFICANT hardware upgrade
Test: build/run the exact same project on cloned MacBooks (where the only difference should be their hardware).
Old MacBook Air (1.86 GHz Core 2 Duo, only 2 GB RAM)
vs
Brand new MacBook Pro (2.3 GHz Core i7, 8 GB RAM)
Building on iPhone 3GS:
MacBook Air: 1:00 - 1:15
MacBook Pro: ~1:00
=> 0:00 to 0:15 speed increase
Building on iPhone 4S:
MacBook Pro: ~0:35
MacBook Air: ~0:50
=> ~15 seconds speed increase
**Partially tested: there DOES appear to be a significant difference in SIMULATOR build times between the two machines.
Another culprit for slowness is plugins. The Subversion plugin was absolutely killing my Xcode performance. I followed the instructions in this SO post to disable it. WHEW!
Related
Our Xamarin Android app (PCL) is currently huge, in my opinion, even in release mode. I suspect it is due to the supported architectures; currently we have them all selected. Does anyone know if we have to select all of these? We are not using the Android NDK at all, either.
I will copy part of my answer from here.
Make sure you are at least checking the following architectures: armeabi-v7a and x86. You could do the other three, but we do not, since we use LLVM compiling in release mode, which is not compatible with the 64-bit architectures (except for armeabi, which is deprecated). The good thing is that all of the 64-bit architectures can still run 32-bit builds, so they all still get covered if you check those.
So I would just check those unless you have a specific reason to check the other ones. We have had zero problems installing our app on devices using only those.
On a side note, turning on LLVM compiling and optimizing your icons/images will help with the final APK size.
*Edit: Since writing this, we ran into a bug on certain devices only (the Nexus 9) that leads to crashes when launching the app. The solution is to check the arm64-v8a architecture. This will probably increase the app size, so weigh the pros and cons, see how much of a difference including the architecture makes in your APK size, and split your APK per architecture if necessary.
No, you do not have to select all of them. You can create one .apk per ABI if you want to reduce the size of your .apk. Note: the encouraged method is to develop and publish a single .apk. However, this is not always practical, and sometimes it's better to create separate ones. Although this answer only goes into depth about different CPU architectures (ABIs), you could also create different .apks for screen sizes, device features, and API levels.
https://developer.xamarin.com/guides/android/advanced_topics/build-abi-specific-apks/
http://developer.android.com/google/play/publishing/multiple-apks.html
I would recommend grabbing a tool like WinDirStat (https://windirstat.info/) or Disk Inventory X (http://www.derlien.com/) to investigate why your .apk is so large. You might find other reasons why your .apk is large, such as resources (images, raw files), assemblies, etc.
Is there a way or an application to test performance by making the app execute slower? I want to be sure that my app will perform well on older hardware.
Just adding stalls in software won't necessarily imitate any older hardware; it would just show you how the stalled code behaves on the new hardware (and if the stalls aren't properly serializing, they may actually be avoided altogether).
If you just want to see how the code behaves without specific ISA features, you can disable them at compile time, or even compile for an older architecture. That won't make your CPU run any slower, of course, but the code won't be able to use, for example, AVX/SSE vector instructions (on x86) or other dedicated instructions.
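To illustrate, here is a small C++ sketch (the function and names are made up): an AVX path is compiled in only when the compiler targets that ISA, so building the same file for an older target leaves just the scalar fallback, letting you observe the code without that feature:

    // Compiled with AVX enabled (e.g. -mavx or a modern -march), the intrinsic
    // path below is used; compiled for an older target, only the scalar loop
    // remains.
    #include <cstddef>
    #include <vector>
    #if defined(__AVX__)
    #include <immintrin.h>
    #endif

    float sum(const std::vector<float>& v) {
        float s = 0.0f;
        std::size_t i = 0;
    #if defined(__AVX__)
        __m256 acc = _mm256_setzero_ps();
        for (; i + 8 <= v.size(); i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(&v[i]));   // 8 floats at a time
        alignas(32) float lanes[8];
        _mm256_store_ps(lanes, acc);
        for (float lane : lanes) s += lane;
    #endif
        for (; i < v.size(); ++i)   // scalar fallback / remainder
            s += v[i];
        return s;
    }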
If you want an old system + OS configuration, you can use emulation - e.g. DOSBox.
If you want an even higher level of realism, you can find a hardware simulator that models the target hardware and run on that (assuming you can cross-compile your code for it).
And of course, if you want an even more realistic experiment and are willing to go the extra mile, just get a specimen of that old hardware, wipe the dust off, and build and run on it :)
I've got a C++ program that is notably slower on OS X 10.8.2 than on Linux. Profiling shows that the reason is that calls to free (resulting from STL operations, FWIW) are much slower on OS X, because they go and call madvise, and real time gets consumed in there.
Is there any way to modulate this behavior of OS X?
Well, yes!
I had horrible performance issues with malloc/free in Linux and started looking for a replacement.
Two options came to mind: tbbmalloc (part of Intel TBB, which is free, BTW) and Google's malloc.
After extensive testing it wasn't clear which of the two was faster, but both were significantly faster than libc's implementation.
I went with tbbmalloc since it worked more smoothly; Google's malloc had a bug that caused virtual memory to be very large (reserved but not committed), which was very bad for my app (IT daemons would kill it).
The good:
Much better performance than libc's malloc: 3x-300x faster in an STL-heavy app.
Simple integration. No code change: add/change one line in the executable's makefile. No change to shared objects (SOs).
The bad:
Memory checkers will not work with the replacements; for memcheck/valgrind/etc., revert to the original malloc.
The app would take 10-30% more memory.
The app I developed was a CAD application that used tens of GBs, building and destroying tens of millions of various structures (lots of STL maps, vectors, and hash_maps).
How to do this:
In the linker command, add -ltbbmalloc and make sure the library is in the lib search path (-L flag).
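As a rough illustration of a code-level alternative (not the pure link-time swap described above), you can also opt individual hot containers into TBB's scalable allocator; the container type names below are made up, and the program still needs to be linked with -ltbbmalloc as noted:

    // Sketch: route only chosen containers through TBB's scalable allocator
    // instead of replacing malloc/free globally.
    #include <map>
    #include <string>
    #include <vector>
    #include <tbb/scalable_allocator.h>

    // STL containers parameterized with tbb::scalable_allocator.
    typedef std::vector<int, tbb::scalable_allocator<int> > FastIntVector;
    typedef std::map<std::string, int, std::less<std::string>,
                     tbb::scalable_allocator<std::pair<const std::string, int> > > FastStringMap;

    int main() {
        FastIntVector v;
        for (int i = 0; i < 1000000; ++i)
            v.push_back(i);                  // allocations go through scalable_malloc

        FastStringMap counts;
        counts["example"] = static_cast<int>(v.size());
        return 0;
    }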
I am planning to build a new, very fast developer machine for Visual Studio 2012 and Windows 7 64-bit. I am getting all fast components, like SSDs and 16 GB of RAM. I was wondering whether Visual Studio 2012 is built to utilize all available CPU cores. I am trying to decide between an expensive six-core i7 CPU and a less expensive quad-core CPU, in terms of whether they make a difference in compile time, since that's what takes the most time when I am not coding.
Note: there's a similar post from 2009, but I wanted to know whether VS 2012 has much better performance than VS 2010 in terms of core utilization.
I am balking at the $1000+ price of the i7 Extreme.
I would recommend getting a non-Extreme, latest-generation i7 with a decent SSD and double the RAM. If you trace what Visual Studio is actually doing at the file-system level, you will see that it reads and writes a great number of files. Many of these files are cached during the second build in a row, but an SSD and enough RAM seem to be the most important speed-up components in the equation. A quad-core i5 or i7 is sufficient, preferably with Hyper-Threading and VT technology in case you want to run 64-bit virtual machines later.
I have also noticed decent compilation performance gains from switching an old computer from IDE to AHCI mode in the BIOS, following the proper guide.
Visual Studio has an option for the maximum number of parallel builds - it will use as many CPU cores as you wish.
However, you shouldn't focus too much on this option when deciding which CPU to buy. The newest Intel processors use Turbo Boost to speed up workloads that use fewer than the maximum number of cores. Also, the Extreme series is very expensive without much of a performance gain; the lower models can usually be overclocked to match the more expensive ones.
If you really feel, however, that your compilation times are too high, you should take a look at IncrediBuild - I've used it myself and must say that it really speeds up the build process. It understands your system specification, uses all the resources it can, and can also use remote build servers.
If you are building large C++ projects, you need as many cores as you can get!
For example, building Qt 4.8.3 took less than two hours on my eight-core machine with an SSD; on a two-core machine with an HDD, it took more than 20 hours.
With BlueGo you can measure how long it takes to build Qt or Boost on your system, so you can use it as a benchmark to find out how well suited your system is for building large C++ projects.
Why does memory matter so much when VS 2012 is only 32-bit? Maybe if you have 10 different projects open; otherwise 8 GB should be more than enough, right? Maybe RAM speed is more important, but I also read that anything over 1600 MHz is a waste. I'm guessing the best thing is an SSD, and a PCI Express SSD like the ones OCZ makes would be the best option.
I am working on a solution with 40+ projects linked together.
At the moment the build time is ~30 minutes, and I really want to shorten it. What would be a good place to start?
Some background: I don't know much about the setup of the solution, but we do a lot of linking (against encryption and codec libraries, and between projects).
The whole project is ~2.7 GB, and we also link against Boost and Intel IPP 7.
Please help point me in a good direction.
Thanks!
Step 1 in C++ build-time reduction is more memory. After switching from 4 GB to 12 GB, I saw my link-all-projects time fall off a cliff: from 5:50 to 1:15.