Haxe: how to speed up compilation (choosing the fastest target)

I'm currently using Haxe, specifically HaxeFlixel, for development. One thing that really bugs me is the build/compile time. I'm not even compiling to the C++ target; I decided to compile to the Neko VM because I thought it might be faster. However, the compile time to Neko (debug or release) is about 4 or 5 seconds. Having to wait this long every time I want to see a result makes it dreadful :).
I even tried debugging with the -v flag, and the parts that take the most time are:
Running command: BUILD
- Copying library file:
C:\HaxeToolkit\haxe\lib\lime/2,9,1/legacy/ndll/Windows/lime-legacy.ndll -> export/windows/neko/bin/lime-legacy.ndll
- Running command: haxe export/windows/neko/haxe/release.hxml
From the above excerpt it seems like everything is behaving normally, which worries me because I do not want normal to be this slow.
Now, 4 or 5 seconds might not seem like a lot to some people, but with Go, JavaScript, Java and other fast-building languages out there, I'm spoiled.
Is there another target I can compile to, that I don't know about, which would be faster than Neko VM compilation? Is there anything I can do to increase compile speed or to further debug the cause of the slowness?

You can consider using the compilation server:
From a terminal, run haxe --wait 6000
In your hxml, add --connect 6000
This will make your build use the compilation server, which caches unchanged modules and recompiles only the modules that changed, so subsequent builds are much faster.
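A minimal sketch of what that looks like (the file name and port are just examples; if your build goes through lime/OpenFL rather than a hand-written hxml, the --connect line has to end up in the generated hxml instead):

# in one terminal, start the server once and leave it running:
haxe --wait 6000

# build.hxml - the same build as before, now routed through the server:
--connect 6000
-main Main
-neko export/windows/neko/bin/Main.n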

Had a similar concern with running a large number of unit tests very quickly. Ended up building to JS and running the tests in node.
Pair that with gulp to build the code and process resources, and things can end up running pretty quickly.
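A rough sketch of that kind of setup (file and class names are made up):

# tests.hxml - compile the test entry point to JavaScript
-main TestMain
-js bin/tests.js

# build and run the suite under Node:
haxe tests.hxml && node bin/tests.js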

Related

CMake 3.16 orders of magnitude slower in the Generate phase for Makefiles compared to older versions

I'm consulting a company on how to speed up their builds and I immediately pointed them to precompiled headers and unity builds - the 10 minute full build could easily drop to 2-3 minutes. Luckily CMake 3.16 was recently released and it supports both of those, so I told them to upgrade.
The problem is the following: once they switched from CMake 2.6 to 3.16, the time it took to run CMake jumped from about 20 seconds to more than 10 minutes. Most of the time is spent in the generate phase. It does complete successfully if given enough time, and the code compiles successfully with unity builds, but this CMake time is unacceptable.
Here is their setup:
CMake 2.6, old-style CMake with global flags/defines/includes - not modern (target-based). Nothing too fancy - no custom commands or build rules and no complicated dependencies.
the compiler used is GCC 7 and they generate Makefiles - the OS is CentOS 7, Kernel: Linux 3.10.0-862.14.4.el7.x86_64
around 2000 .cpp files spread across 100 libraries and 600 executables (most of which are test executables with a single .cpp in them)
most .cpp files are gathered/globbed with aux_source_directory - we know not explicitly listing the .cpp files is an anti-pattern, but that's beside the point - I think this is irrelevant, since globbing should happen in the configuration step and not during generation, correct?
Here is what we observed:
we did a binary search through the different CMake versions and concluded that the huge slowdown happened between 3.15 and 3.16 - exactly when the precompiled header and unity build support was added, but I don't think those features have anything to do with the slowdown - I can't think of a reason for them to have such an impact - it must be some other refactoring or change...
we disabled all tests (that means almost all 600 executables were removed) - slimming down the number of CMake targets from 700 to a bit more than 100 - and the time it took to run CMake dropped significantly, but was still a couple of times longer than with CMake 2.6 for all the 700 targets.
we observed what was happening using strace, and it was mostly lstat and access calls along with some reads and writes - but this was endless; it seemed like hundreds of operations per second in a very repeatable manner. There was also a constant, repeatedly failing attempt to find libgflags.so. Unfortunately I don't have those logs right now.
we did a callgrind profile and here is what it looks like after a couple of minutes of running: https://i.imgur.com/Z9AObso.png (the callgrind output file can be found here) - seems like most of the time is spent in ComputeLinkLibs() and getting the full name and definitions of targets and whatnot... Is having 700 targets too much? Is this an exponential problem? Why isn't it an issue with CMake 3.15?
I couldn't find any reports of anyone else having the same problem on the internet... Any ideas what to try next? Perhaps profile using perf? Try Ninja as a backend (there are reports of it being faster)?
Great analysis. This will be helpful for the developers of CMake. But I doubt you will find much help here. Please open an issue.
Bonus points if you can provide a minimal example showcasing your problem. You might get there by generating tons of files.
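A hypothetical way to generate such a stress case (all names and counts are invented to mimic the project shape described above, roughly 100 libraries and 600 single-file test executables):

# CMakeLists.txt
cmake_minimum_required(VERSION 3.16)
project(slow_generate_repro CXX)

# A batch of libraries for the executables to link against.
foreach(l RANGE 1 100)
  file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/lib_${l}.cpp "int lib_${l}() { return ${l}; }\n")
  add_library(lib_${l} ${CMAKE_CURRENT_BINARY_DIR}/lib_${l}.cpp)
endforeach()

# Hundreds of single-file test executables, each linking a few libraries.
foreach(i RANGE 1 600)
  file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/test_${i}.cpp "int main() { return 0; }\n")
  add_executable(test_${i} ${CMAKE_CURRENT_BINARY_DIR}/test_${i}.cpp)
  target_link_libraries(test_${i} lib_1 lib_2 lib_3)
endforeach()

# Compare: time cmake -S . -B build  (Makefiles generator) with CMake 3.15 vs 3.16.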

Continuous test-driven development for Rust

Ruby has Autotest and JavaScript has Wallabyjs; both run tests and present the results automatically on every save.
Is there any continuous test-driven development system available for Rust?
If not, what is the reason for the absence? Is there a technical reason why such a system makes no sense with Rust, or has simply no one cared to write one yet?
You can use cargo-watch.
Install it by running $ cargo install cargo-watch
In your project directory run $ cargo watch (or $ cargo watch test to be specific)
However, there is a difference from JS and Ruby: Rust is a compiled language and the compilation step takes some time, so you cannot expect the immediate feedback you get from interpreted languages.
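For reference, a typical loop looks like this; note that newer cargo-watch releases take the command to run via -x rather than as a positional subcommand:

cargo install cargo-watch        # one-time install
cargo watch -x test              # re-runs `cargo test` on every save
cargo watch -x check -x test     # or: type-check first, then run the tests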

General approach for a dry run of a firmware build

Static code analysis tools often watch the build process by interposing their own tool, which then runs the normal build process.
That way, the tool can monitor the compiler output in the console.
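For illustration, such an interposed wrapper might look roughly like this (a hypothetical sketch, not any particular tool):

#!/bin/sh
# cc-wrap.sh - point the build at this script instead of the compiler,
# e.g.  make CC=./cc-wrap.sh
echo "CC $*" >> compile-commands.log   # record what would be compiled
exec cc "$@"                           # then hand off to the real compiler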
Is there a simple, general way to run the compilation process without actually compiling? Meaning, not a dedicated solution that depends on a specific compiler, but a general trick.
The idea would be to reduce the compilation time, since only the output is relevant.
Remarks are welcome.
Monica

SASS compiler on Windows is horribly slow

I have problems compiling SASS in my Windows environment.
The website files are on a mapped network drive, which causes a horrible delay in compilation (more than 100 seconds).
I use Grunt to compile the SASS files. I was using grunt-contrib-sass but recently switched to grunt-sass, which uses libsass (C++) instead of the Ruby gem.
Even after switching from grunt-contrib-sass to grunt-sass, the compilation is slow.
I tried to compile directly with sass --watch src:dest but it's still slow.
I also tried several GUIs (Koala, Scout, Prepros), but it's still slow.
I think the main issue is because I'm compiling files on the network drive.
Any ideas that could help speed it up? Maybe I should get a Mac...
If it makes you feel any better, I had the same problem on a Mac at a previous job. I think your best bet would be to compile on your local machine and then use an automated tool to copy the newly compiled file over to the appropriate place on the network drive. Since you're already using Grunt, grunt-multisync may provide a good workflow for you.
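A rough sketch of that workflow, with hypothetical paths and grunt-contrib-copy standing in for grunt-multisync (check that plugin's docs for the actual sync configuration):

module.exports = function (grunt) {
  grunt.initConfig({
    // Compile locally so libsass never touches the network drive.
    // Newer grunt-sass versions also need options: { implementation: require('sass') }.
    sass: {
      dist: {
        files: { 'local-build/main.css': 'src/main.scss' }
      }
    },
    // Afterwards, copy the compiled CSS onto the mapped network drive.
    copy: {
      deploy: {
        files: [{ expand: true, cwd: 'local-build/', src: ['**/*.css'], dest: 'Z:/website/css/' }]
      }
    },
    watch: {
      styles: {
        files: ['src/**/*.scss'],
        tasks: ['sass', 'copy']
      }
    }
  });

  grunt.loadNpmTasks('grunt-sass');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['sass', 'copy', 'watch']);
};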

Perfect makefile

I'd like to use make to get a modular build in combination with continuous integration, automatic unit testing and multi-platform builds. Similar setups are common in Java and .NET, but I'm having a hard time putting this together for make and C/C++. How can it be achieved?
My requirements:
fast build; non-recursive make (Stack Overflow question What is your experience with non-recursive make?)
modular system (that is, minimal dependencies, makefile in subdirectory with components)
multiplatform (typically PC for unit testing, embedded target for system integration/release)
complete dependency checking
ability to perform (automatic) unit tests (Agile engineering)
hook into continuous integration system
easy to use
I've started with non-recursive make. I still find it a great place to start.
Limitations so far:
no integration of unit tests
incompatibility of Windows-based ARM compilers with Cygwin paths
incompatibility of the makefile with Windows-style \ paths
forward dependencies
My structure looks like:
project_root
/algorithm
/src
/algo1.c
/algo2.c
/unit_test
/algo1_test.c
/algo2_test.c
/out
algo1_test.exe
algo1_test.xml
algo2_test.exe
algo2_test.xml
headers.h
/embunit
/harnass
makefile
Rules.top
I'd like to keep things simple; here the unit tests (algo1_test.exe) depend on both the 'algorithm' component (ok) and the unit test framework (which may or may not be known at the time of building this). However, moving the build rules to the top makefile does not appeal to me, as this would distribute local knowledge of components throughout the system.
As for the Cygwin paths: I'm working on making the build use relative paths. This resolves the /cygdrive/c issue (as compilers can generally handle / paths) without bringing in C: (which make dislikes). Any other ideas?
CMake, together with the related tools CTest and CDash, seems to answer your requirements. Worth giving it a look.
Bill Hoffman (A lead CMake developer) refers to the Recursive Make Considered Harmful paper in a post at the CMake mailing list:
... since cmake is creating the makefiles for you, many of the disadvantages of recursive make are avoided, for example you should not have to debug the makefiles or even think about how they work. There are other examples of things in that paper that cmake fixes for you as well.
See also this answer to "Recursive Make - friend or foe?" here on Stack Overflow.
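For the unit-testing requirement specifically, the CTest side is only a few lines per test; a minimal sketch based on the directory layout above (names are placeholders):

# CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(algorithm C)

add_library(algorithm algorithm/src/algo1.c algorithm/src/algo2.c)

enable_testing()
add_executable(algo1_test algorithm/unit_test/algo1_test.c)
target_link_libraries(algo1_test algorithm)
add_test(NAME algo1_test COMMAND algo1_test)

# Configure, build and run the tests:
#   cmake -S . -B build
#   cmake --build build
#   cd build && ctest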
Recursive Make - friend or foe?
Ok here is what I do:
I use one Makefile at the root and wildcard patterns to collect all files in a directory. Note that I assume that, for example, foo/*.c will make up foo.so. This makes maintaining the Makefile minimal, since just adding a file to the directory automatically adds it to the build.
Since it is make you are using, I assume (as I do for my projects) a compiler with gcc (cc) compatible command-line syntax. So MSC is out; but don't get frustrated - I do most of my development (unfortunately) on Windows, use MinGW with MSys, and it works like a charm: it produces native binaries, but is driven from a POSIX-compliant build environment.
Dependency checking is done with the somewhat standard -MD switch. I then include all the *.d files into the Makefile. I build the patterns out of the automatically collected source files.
Finally, unit tests are implemented with the "standard" check target. The check target is like the all target, except that it depends on the unit tests and executes them once everything is built. I do it this way so that you can build just the project, or build the unit tests (and the rest of the project) separately. When I am not developing the project I want to just build it and be done with it.
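A condensed sketch of that approach (directory and library names are placeholders, a static archive stands in for the foo.so mentioned above to keep the example self-contained, and recipe lines must be indented with real tabs):

# Collect sources automatically: anything dropped into foo/ is picked up.
SRCS      := $(wildcard foo/*.c)
OBJS      := $(SRCS:.c=.o)
TEST_SRCS := $(wildcard test/*.c)
TEST_OBJS := $(TEST_SRCS:.c=.o)

CFLAGS    += -MD          # emit a .d dependency file next to every object

all: libfoo.a

libfoo.a: $(OBJS)
	$(AR) rcs $@ $^

# check builds everything, then builds and runs the unit tests.
check: all test-runner
	./test-runner

test-runner: $(TEST_OBJS) libfoo.a
	$(CC) -o $@ $(TEST_OBJS) libfoo.a

clean:
	rm -f $(OBJS) $(TEST_OBJS) $(SRCS:.c=.d) $(TEST_SRCS:.c=.d) libfoo.a test-runner

# Pull in the automatically generated dependency files.
-include $(SRCS:.c=.d) $(TEST_SRCS:.c=.d)

.PHONY: all check clean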
Here is an example of how I do it: https://github.com/rioki/c9y/blob/master/Makefile
It also has the install, uninstall and dist targets.
As you can see, everything is plain make, with no recursive make calls, and all of it is relatively simple. I used automake and autoconf and I will never do that again; other build tools are also out of the question: if I need to install foojam or barmake to build something, I normally ditch that project immediately.
