How to troubleshoot slow SASS compile times?

Pardon the open-ended question, but I don't know how to get more specific at this point.
I've inherited a project, and Compass is taking 15 seconds to compile the Sass in this one particular project. I have two other projects on the same computer where Sass takes 1-1.5 seconds to compile.
What could cause sass to compile so much slower in one project compared to two others on the same computer?
I made a copy of the troublesome project and stripped it way, way down, so it's now actually much simpler than my other two projects. In so doing, I got the compile down from 15 seconds to about 2 seconds, but I should be seeing a much faster compile than that given how tiny the project now is. The entire project now has only one .scss file, containing a single @import statement:
#import "compass";
How is this project taking ~2 seconds to compile when my other much more complex projects take 1 second? There is only one file in the sass directory at this point: main.scss.
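One way to pin down where those remaining ~2 seconds go is to time the compile with and without the Compass import. A minimal sketch, assuming a scratch copy of the project, Ruby Compass installed, and sass/main.scss as the only source (paths are assumptions):

    # Time a full forced recompile with the Compass import in place
    time compass compile --force

    # On the scratch copy, strip the import and time plain Sass for comparison
    echo '/* no imports */' > sass/main.scss
    time sass sass/main.scss css/main.css

If the import-free compile is still slow, the time is likely Ruby and Compass startup overhead rather than the stylesheet itself; a second or so of interpreter spin-up per run was common for Ruby-based Sass.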

Related

CMake 3.16 orders of magnitude slower in the Generate phase for Makefiles compared to older versions

I'm consulting for a company on how to speed up their builds, and I immediately pointed them to precompiled headers and unity builds - the 10-minute full build could easily drop to 2-3 minutes. Luckily, CMake 3.16 was recently released with support for both, so I told them to upgrade.
The problem is the following: once they switched from CMake 2.6 to 3.16, the time it took to run CMake jumped from about 20 seconds to more than 10 minutes, most of it spent in the generate phase. It does complete successfully if you give it enough time, and the code compiles successfully with unity builds, but this CMake time is unacceptable.
Here is their setup:
CMake 2.6, old-style CMake with global flags/defines/includes - not modern (target-based). Nothing too fancy - no custom commands, custom build rules, or complicated dependencies.
the compiler used is GCC 7 and they generate Makefiles - the OS is CentOS 7, Kernel: Linux 3.10.0-862.14.4.el7.x86_64
around 2000 .cpp files spread across 100 libraries and 600 executables (most of which are test executables with a single .cpp in them)
most .cpp files are gathered/globbed with aux_source_directory - we know not explicitly listing the .cpp files is an anti-pattern, but that's beside the point - I think the globbing is irrelevant here, since it happens in the configuration step and not during generation, correct?
Here is what we observed:
we did a binary search through the different CMake versions and concluded that the huge slowdown happened between 3.15 and 3.16 - exactly when the precompiled header and unity build support was added, but I don't think those features have anything to do with the slowdown - I can't think of a reason for them to have such an impact - it must be some other refactoring or change...
we disabled all the tests (which removed almost all of the 600 executables), slimming the number of CMake targets down from 700 to a bit more than 100 - the time it took to run CMake dropped significantly, but it was still a couple of times longer than CMake 2.6 took for all 700 targets.
we observed what was happening with strace, and it was mostly lstat and access calls along with some reads and writes - but it was endless, seemingly hundreds of operations per second in a very repeatable pattern. There was also a constant attempt to find libgflags.so, which kept failing. Unfortunately I don't have those logs right now.
we did a callgrind profile, and here is what it looks like after a couple of minutes of running: https://i.imgur.com/Z9AObso.png (the callgrind output file can be found here) - most of the time seems to be spent in ComputeLinkLibs() and in getting the full names and definitions of targets and whatnot... Is having 700 targets too much? Is this an exponential problem? Why isn't it an issue with CMake 3.15?
I couldn't find any reports of anyone else having the same problem on the internet... Any ideas what to try next? Perhaps profile with perf? Try Ninja as a generator (there are reports of it being faster)?
Great analysis. This will be helpful for the developers of CMake. But I doubt you will find much help here. Please open an issue.
Bonus points if you can provide a minimal example showcasing the problem. You might get there by generating tons of files.
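A throwaway stress test along those lines might look like this - a sketch assuming a POSIX shell, with the target count and names made up to mimic the 700-target setup:

    mkdir -p cmake-repro && cd cmake-repro
    printf 'cmake_minimum_required(VERSION 3.0)\nproject(repro CXX)\n' > CMakeLists.txt
    # Hundreds of tiny libraries, each linked to the previous one, to give
    # ComputeLinkLibs() something to chew on during the generate phase
    for i in $(seq 1 700); do
      echo "void f$i() {}" > "f$i.cpp"
      echo "add_library(lib$i f$i.cpp)" >> CMakeLists.txt
      if [ "$i" -gt 1 ]; then
        echo "target_link_libraries(lib$i PUBLIC lib$((i-1)))" >> CMakeLists.txt
      fi
    done
    mkdir build && cd build && time cmake -G "Unix Makefiles" ..

Timing the same generated project under 3.15 and 3.16 should show whether the regression reproduces outside their codebase.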

Phoenix assets taking incredibly long time to compile when using Elm

My asset compilation in Phoenix is taking around 20 to 60 seconds each time, and it seems to be related to adding Elm to the project. What would cause this to happen?
I figured out this was happening because I wasn't telling Brunch not to run the ES6 compiler on Elm code. It was compiling the Elm code to a 10,000+ line JavaScript file and then trying to push that file through Babel. This is fixable either by putting the Elm output in the vendor folder (which Brunch's default settings exclude from Babel) or by telling Babel to specifically ignore the (in my case) main.js file that the Elm compilation outputs; a sketch of the first option follows.
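For the vendor-folder option, the gist is to have the Elm compiler write its output somewhere Babel won't look. A sketch assuming Elm 0.18's elm-make and Phoenix's conventional paths (both assumptions; with Elm 0.19+ the command is elm make):

    # Compile Elm straight into the vendor folder, which Brunch's default
    # Babel configuration skips (paths are examples)
    elm-make web/elm/Main.elm --output web/static/vendor/main.js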

Haxe how to speed up compilation (choosing fastest target)

I'm currently using Haxe, specifically HaxeFlixel, for development. One thing that really bugs me is the build/compile time. I'm not even compiling to the C++ target; I decided to compile to the Neko VM because I thought it might be faster. However, the compile time for a Neko debug (or release) build is about 4 or 5 seconds. Having to wait this long every time I want to see a result makes it dreadful :).
I even tried debugging with the -v flag, and the parts that take the most time are:
Running command: BUILD
- Copying library file: C:\HaxeToolkit\haxe\lib\lime/2,9,1/legacy/ndll/Windows/lime-legacy.ndll -> export/windows/neko/bin/lime-legacy.ndll
- Running command: haxe export/windows/neko/haxe/release.hxml
From the above excerpt it seems like everything is behaving normally, which worries me because I do not want normal to be this slow.
Now, 4 or 5 seconds might not seem like a lot to some people, but with Go, JavaScript, Java, and other fast-compiling languages out there, I'm spoiled.
Is there another target I don't know about that would compile faster than the Neko VM? Is there anything I can do to increase compile speed, or to further debug the cause of the slowness?
You can consider using the compilation server:
From a terminal, run haxe --wait 6000
In your hxml, add --connect 6000
This will make your build use the compilation server, which caches unchanged modules and recompiles only the modules that changed, speeding up your build.
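Concretely, the two pieces might look like this (the port number is arbitrary):

    # Terminal 1: start a compilation server listening on port 6000
    haxe --wait 6000

    # release.hxml: route builds through the server by adding this line
    --connect 6000

The first build still pays the full cost; later builds reuse the server's cached modules.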
I had a similar concern with running a large number of unit tests very quickly. I ended up building to JS and running the tests in Node.
Pair that with gulp to build the code and process resources, and things can end up running pretty quickly.
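Stripped of the gulp plumbing, that flow is just two commands (the class and file names here are made up):

    # Compile the test entry point to JavaScript, then run it in Node
    haxe -main TestMain -js tests.js
    node tests.js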

SASS compiler on Windows is horribly slow

I have problems compiling Sass in my Windows environment.
The website files are on a mapped network drive, which causes horrible compilation delays (more than 100 seconds).
I use Grunt to compile the Sass files. I was using grunt-contrib-sass but recently switched to grunt-sass, which uses libsass (C++) instead of Ruby/Gems.
Even after switching from grunt-contrib-sass to grunt-sass, the compilation is slow.
I tried to compile directly with sass --watch src:dest but it's still slow.
I also tried several GUIs (Koala, Scout, Prepros), but it's still slow.
I think the main issue is that I'm compiling files on the network drive.
Any ideas that could help speed it up? Maybe I should get a Mac...
If it makes you feel any better, I had the same problem on a Mac at a previous job. I think your best bet would be to compile on your local machine, then use an automated tool to copy the newly compiled file over to the appropriate place on the network drive. Since you're already using Grunt, grunt-multisync may provide a good workflow for you; a rough sketch of the idea follows.
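If you want to see the shape of that workflow without committing to a particular Grunt plugin, a rough batch-file equivalent (robocopy stands in for the sync step; the Z: path is a placeholder):

    rem Compile on the local disk, then mirror the output to the network share
    sass --update src:dist
    robocopy dist Z:\website\css /MIR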

Xcode 4.5 hangs when compiling large file

I have an Xcode project I want to archive.
However, the archive takes a very long time and then throws an error: Xcode cannot compile a file with about 19,000 lines.
Is there a limit on the number of lines Xcode can compile?
Is there a way to actually get Xcode to compile this file?
EDIT:
I have to note that this file only contains some parsed content, i.e. it is a data model which cannot be split.
I know it could be extracted into a database of some sort, but the question is really about Xcode and its compiling behaviour.
I changed my project to have <19,000 lines; however, a similar error still persisted!
What worked for me in Xcode 5, after all, was to change the build settings: set "Optimization Level" for Release (or all configurations) to None. Worked wonders.
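If you'd rather not change the project file, the same setting can be overridden for a one-off archive from the command line (the scheme name is a placeholder):

    # Archive with the optimizer disabled; optimization passes are typically
    # what choke on huge machine-generated files
    xcodebuild -scheme MyApp archive GCC_OPTIMIZATION_LEVEL=0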
Divide & Conquer
What kind of code is that in a single class/file?
You could break that code into modules by creating other classes/objects. That would be simpler to understand and manage.
