Continuous test-driven development for Rust - tdd

Ruby has Autotest, JavaScript has Wallaby.js; both run the tests and present the results automatically on every save.
Is there any continuous test-driven development system available for Rust?
If not, what is the reason for the absence? Is there a technical reason why such a system makes no sense with Rust, or has simply no one written one yet?

You can use cargo-watch.
Install it by running $ cargo install cargo-watch
In your project directory run $ cargo watch (or $ cargo watch test to be specific)
However, there are some differences from JS and Ruby: Rust is a compiled language and the compilation step takes some time, so you cannot expect the immediate feedback you get from interpreted languages.
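To see the loop in action, any ordinary #[test] will do; a minimal sketch (module and function names are made up) that cargo watch (or cargo watch test) would rebuild and re-run on every save:

    // src/lib.rs
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn adds_two_numbers() {
            assert_eq!(add(2, 2), 4);
        }
    }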

Related

Haxe: how to speed up compilation (choosing the fastest target)

I'm currently using Haxe, specifically HaxeFlixel, for development. One thing that really bugs me is the build/compile time. I'm not even compiling to the C++ target; I decided to compile to the Neko VM as I thought it might be faster. However, the compile time to Neko debug (or release) is about 4 or 5 seconds. Having to wait this long every time I want to see a result makes it dreadful :).
I even tried debugging with the -v flag, and the parts that take the most time are:
Running command: BUILD
- Copying library file:
C:\HaxeToolkit\haxe\lib\lime/2,9,1/legacy/ndll/Windows/lime-legacy.ndll -> export/windows/neko/
bin/lime-legacy.ndll
- Running command: haxe export/windows/neko/haxe/release.hxml
From the above excerpt it seems like everything is behaving normally, which worries me because I do not want normal to be this slow.
Now, 4 or 5 seconds might not seem like a lot to some people, but with Go, JavaScript, Java and other fast-building languages out there, I'm spoiled.
Is there another target I can compile to that I don't know about which would be faster than Neko VM compilation? Is there anything I can do to increase compile speed, or to further debug the cause of the slowness?
You can consider using the compilation server:
From a terminal, run haxe --wait 6000
In your hxml, add --connect 6000
This makes your build use the compilation server, which caches unchanged modules and recompiles only the ones that changed, speeding up your build.
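A minimal sketch of the two pieces, assuming a hand-written hxml (the port, main class and output path are placeholders; with HaxeFlixel/lime the hxml is generated, so the --connect line goes into whatever hxml your build actually invokes):

    haxe --wait 6000          # leave this running in a separate terminal

    # build.hxml
    --connect 6000
    -main Main
    -neko export/windows/neko/bin/Game.n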
Had a similar concern with running a large number of unit tests very quickly. Ended up building to JS and running the tests in node.
Pair that with gulp to build the code and process resources, and things can end up running pretty quickly.

Is there any way to write unit tests for GNOME-Shell extensions

I am currently trying to refactor an existing gnome-shell extension's codebase. Part of that is introducing unit tests as it seems rather neglectful to not use tests in 2016.
After some tinkering I managed to set up a working node-phantomjs-qunit pipeline that actually gets me somewhere.
However, shell extensions use a custom imports mechanism as well as some amendments to built-in classes (e.g. String.format via GJS) that make it impossible to actually test those files in an isolated environment, that is, outside the shell.
So my question is: Is it really true that it is impossible to write unit tests for shell extensions?
I've done some work with unit tests for GNOME Shell extensions; take a look at this extension for a complete example:
https://github.com/emerinohdz/power-alt-tab
I've used webpack with babel (optional) and GJS. It is even built using Travis CI.
I've included a dumb polyfill for the GS parts I needed, and provided an alternative to handle modules, using ES6 imports instead of the default GS imports mechanism. No integration tests are possible right now, only unit tests, but at least you have control of most of your codebase.
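The module split described above might look roughly like the sketch below; the file and function names are made up for illustration:

    // src/switcher.js — a plain ES module, unit-testable under node without the Shell
    export function nextWindow(windows, current) {
        const i = windows.indexOf(current);
        return windows[(i + 1) % windows.length];
    }

    // src/extension.js — only the entry point touches Shell APIs
    // (the default GS way would be e.g. const Main = imports.ui.main;)
    import { nextWindow } from './switcher';

webpack then bundles everything back into the single extension.js file that GNOME Shell loads.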

How to install and use open source library on Windows?

I'd like to use an open-source library on Windows (e.g. Aquila, following http://aquila-dsp.org/articles/iteration-over-wave-file-data-revisited/). But I can't understand anything about the "build system"... Everyone just says something like "unzip the tar, do configure, make, make install" on Linux, but I want to use these libraries on Windows. I have several questions.
i) Why do I have to "install" what is just source code? Why can't I use the header files by copying them into the working directory and writing #include ".\aquila\global.h"?
ii) What are configure and make/make install? I can't understand them. I just know that configuring open source on Windows needs "CMake", and that it is a configuration tool... but what does it actually do?
iii) Even though I've done cmake, mingw32-make, mingw32-make install... my compiler says "undefined reference to ...". What does this mean, and what should I do about it?
You don't need to install the sources themselves. You do need to install the libraries that get built from that source code and that your code is going to use.
configure is the standard name for the script that does build configuration for the software about to be built. The usual way it is run (and how you will see it mentioned) is ./configure.
make is a build management tool (as the tag here on SO will tell you). One of the most common mechanisms for building code on linux (etc.) is to use the autotools suite which uses the aforementioned configure script to generate build configuration information for use by generated makefiles which make then uses to build the software. make is also the way to run the default build target defined in a makefile (which is often the all target and which usually builds the appropriate library/binary/etc.).
make install is a specific, secondary, invocation of the make tool on the install target which (generally) installs the (in this case previously) built code into an appropriate location (in the autotools/configure universe the default location is generally under /usr/local).
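On a typical Linux-style source tree the whole sequence described above usually boils down to three commands (options and the install prefix vary per project, so treat this as a sketch):

    ./configure --prefix=/usr/local
    make
    make install          # often needs root: sudo make install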
cmake is, again as the SO tag says, a build system that generates configuration files for other build tools (make, VS, etc.). This allows the developers to create the build configuration once and build on multiple platforms/etc. (at least in theory).
If running cmake worked correctly then it should have generated the correct information for whatever target system you told it to use (make or VS or whatever). Assuming that was make, that should have allowed mingw32-make to build the software correctly (assuming additionally that mingw32-make does not require a different cmake generator than plain make). If that is not working correctly then something is still missing from your system (and cmake probably should have caught that).
But to give any more detail you will need to give more detail about what errors you are actually getting and from what command.
(Oh, and on Windows, and especially if you plan on building your software with VS (or some other non-mingw32-make tool) the chances of you needing to run mingw32-make install are incredibly small).
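For the CMake/MinGW route the question describes, an out-of-source build typically looks something like this (the generator name is the standard CMake one; the install prefix is just an example):

    mkdir build
    cd build
    cmake -G "MinGW Makefiles" -DCMAKE_INSTALL_PREFIX=C:/libs/aquila ..
    mingw32-make
    mingw32-make install

If you would rather build with Visual Studio instead, the same CMake configuration can generate a .sln by picking a Visual Studio generator (for example cmake -G "Visual Studio 16 2019" ..), after which there is no mingw32-make step at all.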
For Windows, use CMake (with the latest Ninja, if you like, as the generator backend).
The process is not simple or straightforward, but it is achievable. You need to write a CMake configuration.
The build process not being simple or straightforward is part of why languages like Java exist (but that's another matter).
Rely on CMake to build the library, and you will get the open-source library for Windows.
Whether you distribute it as a standalone library for Windows systems or distribute it integrated with your own software, in either case you have to build it for Windows.
Writing the CMake configuration helps; it also lets you build for other platforms.
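A minimal CMakeLists.txt for a program that consumes an already-built library might look like the sketch below; the target name, include path and library file are placeholders, not Aquila's actual ones:

    cmake_minimum_required(VERSION 3.10)
    project(wave_tool CXX)

    add_executable(wave_tool main.cpp)

    # Point these at wherever the library's headers and binaries were actually installed
    target_include_directories(wave_tool PRIVATE "C:/libs/aquila/include")
    target_link_libraries(wave_tool PRIVATE "C:/libs/aquila/lib/libAquila.a")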
Now the question comes: is there any other way except CMake for a Windows build?
Would you love the flavour of writing assembly directly?
If the answer is obviously no, you will have to write CMake and generate a .sln for MSVC (and the corresponding files for other compilers).
Read the FAQ and documentation before building an open-source library, and fix the errors as they come up.
It is like handling burning iron, but it pays off if you're working on something meaningful. Most server libraries are open source (e.g. the age-old Apache httpd). So think about what you're doing.
There may not be many open-source libraries that you can use without this effort, but this is the way to use open-source libraries.

Create Ruby/Rake executable

Is there any way to create executable binaries from a Ruby/Rake task?
I have a simple FileUtil tool written in Ruby and I'd like to package it somehow into a script that can be run on OS X, Linux or Windows. Is there any way to do that?
Ruby is an interpreted language, not a compiled one like C or Java, so answering your question is not so easy.
But there are some tools that let you protect your source code (by encrypting it) and create packages that are runnable cross-platform (though in that case you still have to resolve every dependency).
This question covers pretty well how you can distribute your code without exposing (or while encrypting) your source: Can you Distribute a Ruby on Rails Application without Source?
Other useful tools that I have found in the meantime:
- rake-compiler, for building and packaging native extensions from a Rake task: https://github.com/luislavena/rake-compiler
- Debian (.deb) packaging with pkgr: http://crohr.me/pkgr/
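Short of true compilation, the low-tech route is to ship the tool as a plain Ruby executable and let each platform's Ruby run it. A minimal sketch (the FileUtil class and paths are placeholders for whatever the tool actually defines):

    #!/usr/bin/env ruby
    # bin/fileutil — runs as-is on OS X/Linux after chmod +x;
    # on Windows invoke it as: ruby bin/fileutil
    require_relative "../lib/file_util"
    FileUtil.new.run(ARGV)

If the tool is packaged as a gem, listing the script under spec.executables in the gemspec makes gem install create platform-appropriate wrappers, including .bat stubs on Windows.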

Perfect makefile

I'd like to use make to get a modular build in combination with continuous integration, automatic unit testing and multi-platform builds. Similar setups are common in Java and .NET, but I'm having a hard time putting this together for make and C/C++. How can it be achieved?
My requirements:
fast build; non-recursive make (Stack Overflow question What is your experience with non-recursive make?)
modular system (that is, minimal dependencies, makefile in subdirectory with components)
multiplatform (typically PC for unit testing, embedded target for system integration/release)
complete dependency checking
ability to perform (automatic) unit tests (Agile engineering)
hook into continuous integration system
easy to use
I've started with non-recursive make. I still find it a great place to start.
Limitations so far:
no integration of unit tests
incompatibility of Windows-based ARM compilers with Cygwin paths
incompatibility of makefiles with Windows \ paths
forward dependencies
My structure looks like:
project_root
/algorithm
/src
/algo1.c
/algo2.c
/unit_test
/algo1_test.c
/algo2_test.c
/out
algo1_test.exe
algo1_test.xml
algo2_test.exe
algo2_test.xml
headers.h
/embunit
/harnass
makefile
Rules.top
I'd like to keep things simple; here the unit tests (algo1_test.exe) depend on both the 'algorithm' component (OK) and the unit test framework (which may or may not be known at the time of building this). However, moving the build rules to the top makefile does not appeal to me, as this would distribute local knowledge of components throughout the system.
As for the Cygwin paths: I'm working on making the build using relative paths. This resolves the /cygdrive/c issue (as compilers can generally handle / paths) without bringing in C: (which make dislikes). Any other ideas?
CMake together with the related tools CTest and CDash seem to answer your requirements. Worth giving it a look.
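For the automatic unit-test requirement in particular, the CTest hook is only a few lines; a sketch using the file names from the layout above (the target names are made up):

    # CMakeLists.txt (excerpt)
    enable_testing()

    add_executable(algo1_test algorithm/unit_test/algo1_test.c algorithm/src/algo1.c)
    add_test(NAME algo1 COMMAND algo1_test)

Configure and build as usual, then run ctest from the build directory; CDash can pick up the results from your CI system.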
Bill Hoffman (A lead CMake developer) refers to the Recursive Make Considered Harmful paper in a post at the CMake mailing list:
... since cmake is creating the makefiles for you, many of the disadvantages
of recursive make are avoided, for example you should not have to debug
the makefiles or even think about how they work. There are other examples
of things in that paper that cmake fixes for you as well.
See also this answer for "Recursive Make - friend or foe?" here on stackoverflow.
Recursive Make - friend or foe?
OK, here is what I do:
I use one Makefile at the root and wildcard patterns to collect all files in a directory. Note that I assume that foo/*.c will make up foo.so, for example. This keeps maintenance of the Makefile minimal, since just adding a file to the directory automatically adds it to the build.
Since it is make you are using, I assume (as I do for my projects) a compiler with gcc (cc) compatible command-line syntax. So MSVC is out; but don't get frustrated, I do most of my development (unfortunately) on Windows and use MinGW with MSYS; works like a charm. It produces native binaries, but is built in a POSIX-compliant build environment.
Dependency checking is done with the somewhat standard -MD switch. I then include all the *.d files into the Makefile. I build the patterns out of the automatically collected source files.
Finally, unit tests are implemented with the "standard" check target. The check target is like the all target, except it depends on the unit tests and executes them once everything is built. I do it this way so that you can just build the project, or build the unit tests (and the rest of the project) separately. When I am not developing the project I want to just build it and be done with it.
Here is an example of how I do it: https://github.com/rioki/c9y/blob/master/Makefile
It also has the install, uninstall and dist targets.
As you can see everything is plain make, no recursive make calls, and all is relatively simple. I used automake and autoconf and I will never do that again; other build tools are also out of the question: if I need to install foojam or barmake to build something, I normally ditch that project immediately.
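Put together, the wildcard collection, -MD dependency tracking and check target described in this answer might look roughly like the sketch below (directory names follow the layout from the question above; the flags are illustrative, and recipe lines must start with a tab):

    # single root Makefile, no recursion
    SRC      := $(wildcard algorithm/src/*.c)
    OBJ      := $(SRC:.c=.o)
    TEST_SRC := $(wildcard algorithm/unit_test/*_test.c)
    TESTS    := $(TEST_SRC:.c=.exe)

    CFLAGS   += -MD -Ialgorithm -Iembunit

    all: $(OBJ)

    %.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    %.exe: %.c $(OBJ)
    	$(CC) $(CFLAGS) $^ -o $@

    # build everything, then run every unit test
    check: all $(TESTS)
    	for t in $(TESTS); do ./$$t || exit 1; done

    # pull in the dependency files that -MD writes next to the outputs
    -include $(OBJ:.o=.d) $(TESTS:.exe=.d)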
