I would like to verify that a number of PDE target platform definitions are sound, and I would like to run these checks in a CI loop. I am aware that the p2 director can be used to perform installations, but I would like to use the .target files directly. How is this best done?
We have a build definition for our project that's set to continuous integration and runs all the unit tests on every check-in. It was all running fine until recently, when it began to fail a handful of tests that passed on developer machines.
A moment's examination revealed that the tests depended on an external template file whose path was specified in the app.config file. The path pointed to a network drive that developer machines had access to, but which didn't exist on the build server.
The project is a WPF application. Ideally I'll try to refactor the code to bypass that external dependency. But if I can't, or simply to satisfy my curiosity, is there a way of editing the build itself to use or deploy a different config file?
You have a much bigger problem: your code and your tests depend on some random, external, uncontrolled file. That file should be source-controlled and deployed along with your application.
You also have an external dependency in your tests: they have to read that file from the file system. You may want to break that dependency using standard inversion-of-control techniques so that the tests can run in true isolation, without needing a file to be present on the file system.
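As a rough illustration of that inversion-of-control idea (the interface and class names here are hypothetical, not from the original project), the template access could be hidden behind an interface so tests can substitute an in-memory implementation:

    // A sketch of breaking the file-system dependency; ITemplateProvider and
    // the class names are hypothetical.
    public interface ITemplateProvider
    {
        string GetTemplate();
    }

    // Production implementation: reads the source-controlled template file.
    public class FileTemplateProvider : ITemplateProvider
    {
        private readonly string _path;
        public FileTemplateProvider(string path) { _path = path; }
        public string GetTemplate() => System.IO.File.ReadAllText(_path);
    }

    // Test double: no file system involved, so tests run in true isolation.
    public class FakeTemplateProvider : ITemplateProvider
    {
        public string GetTemplate() => "<template/>";
    }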
SlowCheetah should work for this use case.
Nuget.org - SlowCheetah
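SlowCheetah applies XDT transforms per build configuration. A minimal sketch, assuming the path lives in appSettings under a hypothetical TemplatePath key and a custom build configuration named BuildServer:

    <!-- App.BuildServer.config: a hypothetical transform layered over App.config,
         replacing the template path for builds on the build server. -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <add key="TemplatePath" value="C:\BuildData\Templates\report.tpl"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>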
I am using CMake with the Unix Makefiles generator. When I use add_custom_target, the corresponding make target is generated only in CMAKE_CURRENT_BINARY_DIR and CMAKE_BINARY_DIR.
I found this out by testing different sub-directory layouts.
Is this documented anywhere? Is there a way to create a custom target that is available in every directory, similar to the built-in make clean?
The rationale behind the question: I have a bunch of unit tests in several unittest folders. I don't build them as part of the all target, because compiling the tests takes much longer than compiling the actual library; instead I build them with a custom unittest target. I would like to be able to call this target from every unittest subfolder. Preferably it would only build the unit tests located in the current directory and, recursively, in its sub-directories.
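For reference, a minimal layout that reproduces the observation (the foo_test and unittest names are hypothetical):

    # unittest/CMakeLists.txt -- a sketch. EXCLUDE_FROM_ALL keeps the slow
    # test build out of the default "all" target.
    add_executable(foo_test EXCLUDE_FROM_ALL foo_test.cpp)
    add_custom_target(unittest)
    add_dependencies(unittest foo_test)

    # With the Unix Makefiles generator, "make unittest" works from this
    # directory's build folder and from the top of the build tree, but per
    # the observation above it is not emitted into sibling directories'
    # Makefiles the way the built-in "clean" target is.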
I have a ZF2 application that I've set up to be built using a Makefile with various options. The issue at hand is that the /vendors/ directory can contain an assortment of dependencies that are installed/updated via Composer. Each dependency may or may not contain unit tests, and the location of the tests is arbitrary.
When make test is run, I would like the Makefile to search through the /vendors/ directory for any folders named /tests/ and perform unit testing accordingly.
What would be the best way to iterate through the vendors and locate any /tests/ directories in order to perform unit testing?
Use the wildcard function:
LIST_OF_TESTS:=$(wildcard vendors/*/tests)
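Note that $(wildcard) does not recurse, so vendors/*/tests only matches one level deep. Since the test locations are arbitrary, a find-based variant may be needed; a sketch, assuming PHPUnit is the runner (the target name and runner are assumptions, and recipe lines must be tab-indented):

    # Locate every tests/ directory at any depth under vendors/.
    TEST_DIRS := $(shell find vendors -type d -name tests)

    .PHONY: test
    test:
    	@for dir in $(TEST_DIRS); do \
    		echo "Running tests in $$dir"; \
    		(cd $$dir && phpunit) || exit 1; \
    	done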
I have a cross-platform Maven Java build. I want to run the unit tests on each platform, but I only need to compile/package/deploy from one platform.
For example:
checkout on windows
build-test-package-deploy
run unit tests on the other platforms using the Maven repo
At present, I'm building and testing on all platforms, but this consumes resources and generates the artifacts multiple times, which seems wrong from a Maven standpoint.
How do you handle this situation?
I currently have a cross-platform build that needs to build on Windows, 32-bit Ubuntu, 64-bit Ubuntu, and Red Hat. I develop on 32-bit Ubuntu and have profiles for all my target environments in the POM files. When I need to test builds I do it in a VM, or ask someone to check out a clean copy and try to build it (preferred, to make sure nothing machine-specific, such as environment variables or local files, is interfering). Most of my code is fairly portable, though, so my situation may not map directly onto yours.
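For reference, an OS-activated profile can also let secondary platforms run the tests while skipping deploy, so that only one platform publishes artifacts. A sketch (the profile id is hypothetical; maven.deploy.skip is honored by recent versions of maven-deploy-plugin):

    <!-- pom.xml: activates on non-Windows agents; the id is hypothetical. -->
    <profile>
      <id>secondary-platform</id>
      <activation>
        <os>
          <family>!windows</family>
        </os>
      </activation>
      <properties>
        <!-- Run tests but don't publish artifacts from this platform. -->
        <maven.deploy.skip>true</maven.deploy.skip>
      </properties>
    </profile>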
I've been wondering how you go about doing CI-style testing when you're dealing with physical devices.
I imagine you have a suite of tests, and a pool of devices against which they can be run.
Additionally:
Some tests may require specific device models.
Some tests may require the use of more than one device.
What CI servers have support for this?
I'm still interested in those with partial support, either natively or through plugins, as I'm curious how it's done.
Continuous Integration enables a team to integrate and test their work frequently. Automated builds are meant to compile, link, and run unit tests. You want your CI to run fast, especially if you run it on every check-in. That is why you would want to restrict CI activities to simple confirmation of the build and unit tests alone. What you're asking about seems more along the lines of quality assurance (QA) testing, and having QA failures mixed into your CI efforts would keep development from progressing.
As such, my impression is that CI activities should not depend on the final physical machine the work may eventually be migrated to.
Now, this doesn't mean you can't take the CI-compiled package and run it against some final target machine, but again, that is really considered a separate activity.
This seems to be reinforced in the following article by Martin Fowler.
Notice that he doesn't talk about the final targeted devices, only the build machine.
I can suggest Test Manager, which is part of Microsoft's TFS suite. I have not tried it with many environments other than Windows-based ones, though I know there are many connectors. For Windows-based environments I believe it will satisfy most needs.
I use it for nightly builds to perform smoke tests (turn it on, see if any smoke comes out), but you have to be careful to keep the tests small so they finish in a matter of hours rather than days if you want them to be part of your CI.
Then, once quality is good enough, you can proceed to regression tests and integration tests if needed.
I wouldn't get too caught up in what a CI system is supposed to do or not do. Instead, I would focus on the problem you are trying to solve. It sounds like that problem is to facilitate development on multiple platforms. You can take the concept of Continuous Integration and build on it to address that issue successfully. I know, because I've done it in the past.
I implemented a build system for code that needed to compile and test successfully on 4 different platforms (NT, WinCE, linux-arm, linux-x86). The CI server would (a rough sketch follows this list):
Use a Linux and a Windows NT build server for compilation (and cross-compilation).
Copy the compiled tests and supporting libs to the appropriate devices and execute an automated test run.
Copy the log back once the test suite completed (or write it to a network-mounted FS).
Tag the source and package the libs and executables if the test suite was successful.
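As a rough sketch of the copy/run/collect steps above (device addresses, paths, and the run_all.sh runner are hypothetical, not the original system's):

    # A sketch of the per-device copy-run-collect loop described above.
    import subprocess

    DEVICES = {
        "wince-1": "192.168.1.10",
        "linux-arm-1": "192.168.1.11",
    }

    def run_on_device(name: str, host: str) -> bool:
        # Copy the cross-compiled tests and supporting libs to the device.
        subprocess.run(
            ["scp", "-r", f"build/{name}/tests", f"root@{host}:/tmp/tests"],
            check=True)
        # Execute the suite remotely; the exit code reflects pass/fail.
        result = subprocess.run(["ssh", f"root@{host}", "/tmp/tests/run_all.sh"])
        # Copy the log back for archiving (or write to a network-mounted FS).
        subprocess.run(
            ["scp", f"root@{host}:/tmp/tests/suite.log", f"logs/{name}.log"],
            check=True)
        return result.returncode == 0

    if __name__ == "__main__":
        ok = all(run_on_device(n, h) for n, h in sorted(DEVICES.items()))
        print("all device suites passed" if ok else "failures; do not tag")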
This same platform was reused for developer verification before commits. Developers would run a partial build and test (only updated source would be recompiled and those tests rerun). The CI would execute a full build (from scratch).
Our builds were pretty fast because we had a proper DAG for the build dependencies. This allowed for concurrent compilation within a platform build, and each platform build also ran concurrently with the others. As a result, partial builds took a few seconds and full builds took ~30 minutes. Our build servers were quite beefy (optimized for fast compiles) and the codebase was of moderate size (I don't remember the stats).