How to test my makefile for all supported configurations? - makefile

As you know, make is dynamic, and makefiles can be written to build for different environments and configurations. How can someone test such makefiles, since it's totally infeasible to explicitly run them on every supported environment/configuration?

All you can do is the best you can do; test on as many of the supposedly-supported configurations as you can.
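
One practical way to get partial coverage is to script a small configuration matrix and run make against every combination, either locally or as a CI matrix job. Below is a minimal sketch in Python; the configuration axes (CC, DEBUG) and the targets are hypothetical placeholders for whatever variables your makefile actually supports.

import itertools
import subprocess

# Hypothetical configuration axes; substitute the variables your own makefile supports.
MATRIX = {
    "CC": ["gcc", "clang"],
    "DEBUG": ["0", "1"],
}

failures = []
names = list(MATRIX)
for combo in itertools.product(*(MATRIX[n] for n in names)):
    overrides = ["%s=%s" % (n, v) for n, v in zip(names, combo)]
    subprocess.run(["make", "clean"], check=False)        # start from scratch each time
    result = subprocess.run(["make", "all"] + overrides)  # e.g. make all CC=clang DEBUG=1
    if result.returncode != 0:
        failures.append(" ".join(overrides))

print("failing configurations:", failures or "none")

The same idea scales up to a CI matrix with one job per combination, which also gets you closer to testing on the real environments rather than just the real flags.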

Related

Is there a way to run feature files in parallel using cucumber-js v5.1.0?

There is currently a --parallel option that can run scenarios in parallel using cucumberjs 5.1.0. I want to find out if there is a way to run feature files in parallel instead of scenarios, with this version of cucumberjs.
Looking at the code, it looks like there is only the capability to run by scenario: https://github.com/cucumber/cucumber-js/blob/master/src/runtime/parallel/master.js#L105. What is your reason for wanting to run by feature instead of scenario?
A scenario is meant to be a completely independent test that doesn't rely on anything set up by the scenarios that ran before it, which is why the makers of Cucumber have provided the parallel functionality in the way that they have.
Hooks can be used to set up the scenarios individually (by tags or the scenario's name) or run scripts to register users and other test data needed for the tests.
In summary, the way they provide the parallel testing functionality means you are forced to stick to best practices.

How can I disable parallel build for a certain project?

Two of my projects require gigantic amounts of RAM for compiling.
I have already figured out how to make certain that they do not build in parallel with each other, by specifying the build order.
Now I also want to make certain that the critical project does not compile several of its own .cpp files in parallel.
But I definitely want to keep the parallel build option enabled for all the other projects.
Does anybody have a clue?
Use the compiler's /MP option to control this. /MP (or /MPn) is what lets cl.exe compile several .cpp files of one project concurrently, so leave it off (or set /MP1) for the memory-hungry projects and keep it enabled for the rest; project-level parallelism for the other projects is unaffected.

How to octopus deploy different versions of the dependent assembly in different environments

We have a project that can use two different versions of a certain DLL, and we need it deployed in two different environments. Which version of the DLL is used should depend on the environment.
One suggested solution is to copy the entire code base and create Octopus deployment configurations based on those two code bases.
I am strongly against this, but I still don't have a solution to the problem.
I think that binding redirection won't work, because I cannot specify the DLL path in the config, and of course I can't have those two files in the same directory.
Any ideas?
It could easily be solved with a PowerShell script run as an Octopus Deploy step. For example, your project could ship two files:
YourFile.dll
YourFile.v2.dll
Then your PowerShell post-deployment step will look something like this:
if ($OctopusParameters["Octopus.Environment.Name"] -eq "Dev") {
    # Keep only the v2 assembly in the Dev environment.
    Remove-Item "YourFile.dll"
    Rename-Item "YourFile.v2.dll" "YourFile.dll"
}
I agree, though, that this is quite an unusual problem and might indicate a code smell.

How to find Executable References programmatically

I have a server with many executable files (.exe, .dll), but these files link to other executables too... and so on.
To prevent deployment errors, I need a way to search for every dependency, and its dependencies in turn, and check for any platform incompatibility (32- vs. 64-bit).
I have no problem detecting a binary's platform, but I don't know how to get a list of its dependencies.
Any suggestions?
Check out http://www.dependencywalker.com/ - this is the best tool for seeing the module dependency hierarchy.
This question seems to get asked regularly here. I don't recommend reverse engineering your dependencies as you propose. The problem is that on different platforms the dependencies may differ. If you can, you should work it out from the source. This is not as hard as it may seem at first glance.
For managed code you can use Reflection to go through all the managed dependencies, and the Dependency Walker that tenfour pointed out should help with the native. Problems will arise when you start factoring in other types of dependencies that are not as easily checked: registry keys, COM, configuration files/settings.
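For the native side, a script can read each binary's PE headers to recover both the target machine type and the static import table. Here is a rough sketch in Python, assuming the third-party pefile package is available; it lists only the direct imports of each binary in one directory, and recursing into those imports is a straightforward extension.

import os
import sys
import pefile  # third-party package: pip install pefile

def machine(path):
    # Crude check: treat anything other than x64 (machine type 0x8664) as 32-bit.
    pe = pefile.PE(path, fast_load=True)
    try:
        return "64-bit" if pe.FILE_HEADER.Machine == 0x8664 else "32-bit"
    finally:
        pe.close()

def imports(path):
    # Return the DLL names listed in the binary's static import table.
    pe = pefile.PE(path, fast_load=True)
    try:
        pe.parse_data_directories(
            directories=[pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_IMPORT"]])
        return [entry.dll.decode() for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])]
    finally:
        pe.close()

root = sys.argv[1]  # directory containing the deployed binaries
for name in os.listdir(root):
    if name.lower().endswith((".exe", ".dll")):
        path = os.path.join(root, name)
        print(path, machine(path), imports(path))

Note that this only covers the static import table; delay-loaded and LoadLibrary-style dependencies, and the managed references that the Reflection approach handles, need separate treatment.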
Your best bet would be to develop a set of acceptance tests for deploying to your server, or to set up some kind of staging environment before pushing to the live server. This will help catch deployment errors, as well as a number of other functional defects that might have slipped through initial testing.

What is the best way to setup an integration testing server?

Setting up an integration server, I'm in doubt about the best approach to using multiple tasks to complete the build. Is it best to put everything in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle a number of tasks such as report generation.
Here is an article I wrote about using CI with CruiseControl.NET; it has an NAnt build script in the comments that can be reused across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are working on a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step. NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these grow to the point where a long time passes after a checkin before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two):
A CI build, triggered by checkin, which does a clean SVN get and build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, perhaps hourly (if there are changes), which does the same as the CI build but runs more thorough and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for each step and let you do historical analysis, so you can continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each one a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that; a rough buildbot sketch of this mapping follows the list below. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
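As an illustration only, here is a minimal sketch of how such a whiteboard list might map onto a buildbot build factory. It uses the current buildbot.plugins Python API, which may differ from the version behind the answer above, and every name, repository URL and command is a hypothetical placeholder (only two of the platform pieces are shown).

# Sketch of a buildbot master.cfg fragment; all names, URLs and commands are placeholders.
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.SVN(name="checkout", repourl="svn://example.org/plugins/trunk"))
factory.addStep(steps.ShellCommand(name="build plugin pieces (mac)",
                                   command=["make", "PLATFORM=mac"]))
factory.addStep(steps.ShellCommand(name="build plugin pieces (linux)",
                                   command=["make", "PLATFORM=linux"]))
factory.addStep(steps.ShellCommand(name="make final plugins", command=["make", "plugins"]))
factory.addStep(steps.ShellCommand(name="run plugin tests", command=["make", "check-plugins"]))
factory.addStep(steps.ShellCommand(name="build ide", command=["make", "ide"]))
factory.addStep(steps.ShellCommand(name="run ide tests", command=["make", "check-ide"]))

builder = util.BuilderConfig(name="full-build", workernames=["worker1"], factory=factory)
# In master.cfg this builder would be appended to c['builders'].

Each whiteboard item becomes a named step, so a failure shows up against that step in the buildbot UI, which is what makes the "pretty easy to diagnose" point above work in practice.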
I would definitely break down the jobs. Chances are you're going to make changes to the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to compose one big job from the smaller pieces anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server as close as possible to the deployment environment in how it is built and configured.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
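As a rough sketch of that idea in Python (rather than the NAnt setup mentioned in the next paragraph), with the repository URL and build commands purely as placeholders and credentials handling omitted, the discrete operation becomes a small reusable function with its own logging, and the higher-level script just strings such operations together:

import logging
import subprocess

log = logging.getLogger("build")

def get_latest_source(repo_url, workdir):
    # Discrete, reusable operation: fetch the latest source, with logging.
    log.info("updating %s from %s", workdir, repo_url)
    subprocess.run(["svn", "checkout", repo_url, workdir], check=True)

def compile_project(workdir):
    log.info("compiling %s", workdir)
    subprocess.run(["make", "-C", workdir], check=True)

def run_tests(workdir):
    log.info("running tests in %s", workdir)
    subprocess.run(["make", "-C", workdir, "test"], check=True)

def main():
    # The higher-level, project-specific script: ties the reusable operations together.
    logging.basicConfig(level=logging.INFO)
    get_latest_source("svn://example.org/project/trunk", "build/project")
    compile_project("build/project")
    run_tests("build/project")

if __name__ == "__main__":
    main()

The individual functions are the pieces you copy or share between projects; only the main() wiring tends to stay project-specific.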
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write and debug the scripts) and the CI server (where we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).
