ant conditional targets and 'recursion' - coding-style

I'm fairly new to Ant, and I've seen Uncle Bob's "extract until you drop" episode.
As a result I try to define Ant targets as small as possible, so you can see exactly the essence of the target, and no more. For more details, you have to refer to sub-targets.
Whether that's good or bad style is a different debate (or a flame-war maybe).
Therefore, I was creating a build script that, in pseudo-code, would look like this:
build =
compile
instrument if coverage
The coverage target is split into sub-targets, too:
coverage:
create-coverage-dirs
call-cobertura
EDIT: I want to express that the coverage sub-targets should not be run when coverage is disabled.
But... I'm having a hard time expressing this 'cleanly' in ant-ese.
Assuming that I can use the depends attribute to indicate ... inter-target dependencies, I got to something like this:
<target name="build" depends="compile, coverage"/>
<target name="compile"> .... </target>
<target name="coverage" depends="
create-coverage-dirs,
taskdef-cobertura"
if="build.with.coverage">
<cobertura-instrument ...> ... </cobertura-instrument>
</target>
<target name="create-coverage-dirs">
...
</target>
<target name="taskdef-cobertura">
...
</target>
Wow, this looked nice!
Only it seemed that, when executing, the coverage target was dutifully omitted, but its sub-targets were still executed when build.with.coverage was false!
>ant -v compile
Build sequence for target(s) `build' is
[compile, create-coverage-dirs, taskdef-cobertura, coverage, build]
Complete build sequence is
[compile, create-coverage-dirs, taskdef-cobertura, coverage, build, ]
I can put an if attribute on every coverage sub-target, but that doesn't seem clean to me.
So here's the question:
Is my ant-ese a horrible dialect? Am I 'making ant into make'?
Should if be used this way, or is there an if-and-recurse kind-of attribute?

Repeat after me: Ant is not a programming language. In fact, write it down 100 times on the blackboard.
Ant is not a programming language, so don't think of it as such. It is a build dependency matrix.
It's difficult for programmers to wrap their heads around that idea. They want to tell Ant each step and when it should be done. They want loops, if statements. They'll resort to having a build.sh script to call various targets in Ant because you can't easily program Ant.
In Ant, you specify discrete targets, say which targets depend upon which other targets, and let Ant handle where and when things get executed.
What I am saying is that you don't normally split targets into sub-targets and then try calling <ant> or <subant> on them.
Have discrete targets, and let each target declare which other targets it depends upon. Also remember that there is no true order in Ant: when you list targets in depends=, don't rely on the listed order, because a target can run earlier if something else also depends on it.
Standard Ant Style (which means the way I do it (aka The Right Way), and not the way my colleague does it (aka The Wrong Way)) normally states to define properties and tasks at the top of the build file and not in any target. Here's a basic outline of how I structure my build.xml:
<project name=...>
<!-- Build Properties File -->
<property name="build.properties.file"
value="${basedir}/build.properties"/>
<property file="${build.properties.file}"/>
<!-- Base Java Properties -->
<property name="..." value="..."/>
<taskdef/>
<taskdef/>
<!-- Javac properties -->
<property name="javac..." value="..."/>
<task/>
<task/>
</project>
This creates an interesting hierarchy. If you have a file called build.properties, it will override the properties as defined in the build.xml script. For example, you have:
<property name="copy.verbose" value="false"/>
<copy todir="${target}"
verbose="${copy.verbose}">
<fileset dir="${source}"/>
</copy>
You can turn on the verbose copy by merely setting copy.verbose = true in your build.properties file. And you can specify a different build properties file by passing it on the command line:
$ ant -Dbuild.properties.file="my.build.properties"
(Yes, yes, I know there's a -propertyfile command-line parameter for Ant.)
I normally set the various values in the build.xml to the assumed defaults, but anyone can change them by creating a build.properties file. And, since all the base properties are at the beginning, they're easy to find.
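For instance, a build.properties sitting next to the build.xml could be as small as this (the value shown is purely illustrative):
# build.properties - local overrides for the defaults assumed in build.xml
copy.verbose = true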
Tasks are defined in this non-target space too. That way, I can easily find the definitions, since they're in the same place in each build.xml, and I know I can use a task without worrying about whether the target that defines it has been run.
Now, to your question:
Define your tasks (and don't have a taskdef-defining target, or you'll drive yourself crazy). Then, define the dependencies between those targets. Developers can select the targets they want to hit. For example:
<project>
<description>
yadda, yadda, yadda
</description>
<taskdef name="cobertura"/>
<target name="compile"
description="Compile the code"/>
<!-- Do you have to compile code before you run Cobertura?-->
<target name="coverage"
description="Calculate test coverage"
depends="compile">
<mkdir dir="${coverage.dir}"/>
<cobertura-instrument/>
</target>
</project>
If you want to compile your code, but not run any tests, you execute ant with the compile target. If you want to run tests, you execute ant with the coverage target. There's no need for the if= parameter.
Also notice the description= parameter and the <description> task. That's because if you do this:
$ ant -p
Ant will show what's in the <description> task, all targets with a description parameter, and that description. This way, developers know what targets to use for what tasks.
By the way, I also recommend doing things the right way (aka doing it the way I do it) and naming your targets after the Maven lifecycle goals. Why? Because it's a good way to standardize the names of targets. Developers know that clean will remove all built artifacts, compile will run the <javac> task, and test will run the JUnit tests. Thus, you should use the goals in the Cobertura plugin: cobertura.
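A minimal naming sketch along those lines (the property names and target bodies here are placeholders, not a working build file):
<target name="clean" description="Remove all built artifacts">
    <delete dir="${target.dir}"/>
</target>
<target name="compile" description="Compile the code">
    <javac srcdir="${src.dir}" destdir="${classes.dir}"/>
</target>
<target name="test" depends="compile" description="Run the JUnit tests">
    <junit/>
</target>
<target name="cobertura" depends="compile" description="Run the Cobertura-instrumented JUnit tests"/>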
Edit
my problem is: I regard 'coverage' as related to 'optimized' and 'debug', i.e. a build flavor. That's where my difficulty lies: for Java, coverage results in an extra intermediate target in the compile step.
I'm looking at the Cobertura page, and there's no real change in the <javac> task (which is part of the compile target).
Instead, you run Cobertura on the already built .class files, and then run your <junit> task. The big change is in your <junit> task, which must now include references to the Cobertura jars and to your instrumented classes.
I imagine you could have a cobertura target, or whatever you want to call it. This target runs the instrumented JUnit tests. This is the target you want developers to hit, and it should have a description saying that it runs instrumented tests.
Of course, you can't run the instrumented JUnit tests without first instrumenting them. Thus, your cobertura target will depend upon another instrument.tests target. This target is internal. People who run your build.xml don't normally say "instrument tests" without running those tests. Thus, this target has no description.
Of course, the instrument.tests target depends upon having .class files to instrument, so it will have a dependency upon the compile target that runs the <javac> task:
<target name="instrument.classes"
depends="compile">
<coburtura-instrument/>
</target>
<target name="corburtura"
depends="instrument.classes"
description="Runs the JUnit tests instrumented with Corburtura">
<junit/>
</target>
The only problem is that you're specifying your <junit> task twice: once instrumented, and once for normal testing. This might be a minor issue, but if you update how your JUnit tests run, you have to do it in two places.
If you want to solve this issue, you can use <macrodef> to define a JUnit-test-running macro. I used what was on the Cobertura page to help with the outline. Completely untested and probably full of syntax errors:
<target name="instrument.tests"
depends="compile">
<corburtura-instrument/>
</target>
<target name="corburtura"
depends="instrument.tests"
description="Instrument and run the JUnit tests">
<run.junit.test fork.flag="true">
<systemproperty.addition>
<sysproperty key="net.sourceforge.corbertura.datafile"
file="${basedir}/cobertura.ser" />
</systemproperty.addition>
<pre.classpath>
<classpath location="${instrumented.dir}" />
</pre.classpath>
<post.classpath>
<classpath refid="cobertura_classpath" />
</post.classpath>
</run.junit.test>
</target>
<target name="test"
description="Runs the Junit tests without any instrumentation">
<run.junit.test/>
</target>
<macrodef name="run.junit.test">
<attribute name="fork.flag" default="false"/>
<element name="sysproperty.addition" optional="yes"/>
<element name="pre.classpath" optional="yes"/>
<element name="post.classpath" optional="yes"/>
<sequential>
<junit fork="#{fork.flag}" dir="${basedir}" failureProperty="test.failed">
<systemproperty.addtion/>
<pre.classpath/>
<classpath location="${classes.dir}" />
<post.classpath/>
<formatter type="xml" />
<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />
<batchtest todir="${reports.xml.dir}" unless="testcase">
<fileset dir="${src.dir}">
<include name="**/*Test.java" />
</fileset>
</batchtest>
</junit>
</sequential>
</macrodef>

I would not use a property at all in this case, but rely solely on depends (which seems more natural to me for this task):
<target name="build" depends="compile, coverage"/>
<target name="compile"> ...
<target name="coverage"
depends="compile, instrument,
create-coverage-dirs, taskdef-cobertura"> ...

The if attribute tests if the property exists, not if it is true or false. If you don't want to run the coverage target then don't define the property build.with.coverage.
As of Ant 1.8.0 you can use property expansion to resolve the property as a boolean:
<target name="coverage" depends="
create-coverage-dirs,
taskdef-cobertura"
if="${build.with.coverage}">

Related

ivy dependencies describing parallel execution

I have an Ivy-based build, and my top-level build script (simple enough) goes something like this:
<target name="buildlist">
<ivy:buildlist reference="build-path">
<fileset dir="." includes="*/**/build.xml"/>
</ivy:buildlist>
</target>
<target name="all" depends="buildlist" description="build, publish and report for all projects">
<echo message="Calling 'all' on ${toString:build-path}."/>
<subant target="all" buildpathref="build-path"/>
</target>
What would be REALLY cool is to be able to parallelize the build. E.g., Ivy knows the dependencies and could indicate which subant calls could be run in parallel. Has anyone done something like this?

Setting CultureInfo before compilation and always resetting after

I am trying to execute a task which changes the locale/culture used at compile time for an XNA content pipeline project, and restores the original after the compilation has ended. The intention is to allow proper parsing of floats on non-English machines.
So far I am using BeforeBuild and AfterBuild like this:
<UsingTask TaskName="PressPlay.FFWD.BuildTasks.SetLocale" AssemblyFile="PressPlay.FFWD.BuildTasks.dll" />
<Target Name="BeforeBuild">
<SetLocale> <!-- By default, set to 'en-US' -->
<Output TaskParameter="PrevLocale" ItemName="OrigLocale" />
</SetLocale>
</Target>
<Target Name="AfterBuild">
<SetLocale Locale="#(OrigLocale)" />
</Target>
It works properly, except when an error occurs during compilation (an invalid XML or ContentSerializer error), after which the locale will not be reset.
Answers on SO are contradictory: some say AfterBuild always executes (not in my case) and others say there's no way to ensure a target is always run after the build. I haven't found precise info regarding this on Google.
I know there is the option of using PostBuildEvent and setting it to always run, but it'd use Exec to run the command and I suspect it would run in a separate thread, defeating its purpose (I set CurrentThread.CultureInfo to change the locale).
So, is there a way to ensure the target is always run? Alternatively, is there any other way to tell VS2010 to run a compilation with a specific culture?
Links to documentation explicitly clarifying the issue would be very appreciated.
-- Final solution, following Seva's answer --
XNA's content pipeline does not declare PreBuildEvent or PostBuildEvent. The other required properties (RunPostBuildEvent, PreBuildEventDependsOn and PostBuildEventDependsOn) aren't defined, either. However, if you define them, the content pipeline will make good use of them as in any other project.
So, the changes I had to make to the .contentproj file were:
<!-- Added to ensure the locale is always restored -->
<PropertyGroup>
<RunPostBuildEvent>Always</RunPostBuildEvent>
</PropertyGroup>
<!-- Reference includes, project references and other stuff -->
<!-- ... -->
<Import Project="$(MSBuildExtensionsPath)\Microsoft\XNA Game Studio\$(XnaFrameworkVersion)\Microsoft.Xna.GameStudio.ContentPipeline.targets" />
<!-- Customizations to change locale before compilation and restore it after -->
<!-- Needed to properly treat dots in the XMLs as decimal separators -->
<UsingTask TaskName="PressPlay.FFWD.BuildTasks.SetLocale" AssemblyFile="PressPlay.FFWD.BuildTasks.dll" />
<!-- Apparently ContentPipeline.targets does not define PreBuildEvent and PostBuildEvent -->
<!-- However, they are still used if defined -->
<Target Name="PreBuildEvent" DependsOnTargets="$(PreBuildEventDependsOn)"/>
<Target Name="PostBuildEvent" DependsOnTargets="$(PostBuildEventDependsOn)"/>
<PropertyGroup>
<PreBuildEventDependsOn>
$(PreBuildEventDependsOn);
EstablishUSLocale
</PreBuildEventDependsOn>
</PropertyGroup>
<PropertyGroup>
<PostBuildEventDependsOn>
$(PostBuildEventDependsOn);
RestoreOriginalLocale
</PostBuildEventDependsOn>
</PropertyGroup>
<Target Name="EstablishUSLocale">
<SetLocale Locale="en-US">
<Output TaskParameter="PrevLocale" ItemName="OrigLocale" />
</SetLocale>
</Target>
<Target Name="RestoreOriginalLocale">
<SetLocale Locale="#(OrigLocale)" />
</Target>
This solution also indirectly takes care of another problem: the potential issues that could arise if another project file redefined BeforeBuild or AfterBuild, resulting in one of the definitions overriding the other.
You can use PostBuildEvent, because you can configure it to always execute after the build. As you correctly noticed, though, using the Exec task will not work here. However, PostBuildEvent is actually extendable through a property called $(PostBuildEventDependsOn). You will need to define this property:
<PropertyGroup>
<PostBuildEventDependsOn>RestoreOriginalLocale</PostBuildEventDependsOn>
</PropertyGroup>
The target RestoreOriginalLocale is what you had in your AfterBuild target:
<Target Name="RestoreOriginalLocale">
<SetLocale Locale="#(OrigLocale)" />
</Target>
Your BeforeBuild target is still needed; it remains as you wrote it in your question.
To ensure PostBuildEvent is executed on failure (and thus RestoreOriginalLocale with it), you need to set the RunPostBuildEvent property to Always. You can do it through the IDE, or by manually editing your .csproj file.

Generate Checksum for directories using Ant build command

I tried to generate a checksum for a directory using Ant.
I tried the command below, but it generates a checksum recursively for each file inside each folder.
<target name="default" depends="">
<echo message="Generating Checksum for each environment" />
<checksum todir="${target.output.dir}" format="MD5SUM" >
<fileset dir="${target.output.dir}" />
</checksum>
</target>
I just want to generate one checksum for a particular directory using an Ant command.
How do I do that?
You want to use the totalproperty attribute. As per the documentation this property will hold a checksum of all the checksums and file paths.
e.g.
<target name="hash">
<checksum todir="x" format="MD5SUM" totalproperty="sum.of.all">
<fileset dir="x"/>
</checksum>
<echo>${sum.of.all}</echo>
</target>
Some other general notes:
This is not idempotent. Each time you run it you will get a new value, because the new hash includes the hash files written by the previous run (and then writes new hash files). I suggest that you change the todir attribute to point elsewhere; see the sketch after these notes.
It's a good idea to name your targets meaningfully. See this great article by Martin Fowler for some naming ideas.
You don't need the depends attribute if there's no dependency.
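A minimal sketch of that (untested; the directory properties here are made up): write the checksum files into a separate directory so that re-runs don't hash the previous output, and use totalproperty for the single directory-level value.
<target name="hash-dir">
    <mkdir dir="${checksums.dir}"/>
    <checksum todir="${checksums.dir}" format="MD5SUM" totalproperty="dir.checksum">
        <fileset dir="${source.dir}"/>
    </checksum>
    <echo>Directory checksum: ${dir.checksum}</echo>
</target>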

run MsBuild tasks (targets?) after the solution is built?

Since this question seems to have baffled / underwhelmed SO, I will rephrase it with a partially formed idea of my own.
Could I somehow set up a batch file or something that runs after the whole solution is built, and have this batch file call msbuild to build specific targets inside a certain project? In order for it to work, I would have to somehow force msbuild to build the target without regard to whether it thinks it's "up to date", because that is the core issue I'm butting up against.
Since you are dealing with building specifically you may want to replace your batch file with an MSBuild file. For example:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<SolutionsToBuild Include="MySolution.sln"/>
<Projects Include="Proj1.csproj"/>
<Projects Include="Proj2.csproj"/>
<Projects Include="Proj3.csproj"/>
</ItemGroup>
<Target Name="BuildAll">
<!-- Just executes the DefaultTargets (Build) -->
<MSBuild Projects="#(SolutionsToBuild)"/>
<!-- Call Rebuild if you think its not building correctly -->
<MSBuild Projects="#(Projects)"
Targets="Rebuild"/>
</Target>
</Project>
Then you just invoke msbuild.exe on this file with:
msbuild.exe Build.proj /t:BuildAll
Since you said that you want to build specific projects after the solution is built just put those into the Projects ItemGroup as shown and use the MSBuild task to build them after the solution has been built. I've specified the Rebuild target to make sure you get a clean build.

How do I tell MSTEST to run all test projects in a Solution?

I need to know how to tell MSTEST to run all test projects in a solution file. This needs to be done from the command line. Right now I have to pass it a specific project file, I'm trying to get it to run from a SOLUTION file.
I'm hoping this is possible, because in Visual Studio, hitting Ctrl+R, A, runs ALL tests in the currently opened solution.
The way I've interpreted the help files, you have to pass in each DLL specifically.
I want to run this from the command line from my CruiseControl.NET server, so I can write other utilities to make this happen. If there is a weird way of getting this to happen through some OTHER method, let me know.
How do I tell MSTEST to run all test projects for a solution?
<exec>
<!--MSTEST seems to want me to specify the projects to test -->
<!--I should be able to tell it a SOLUTION to test!-->
<executable>mstest.exe</executable>
<baseDirectory>C:\projects\mysolution\</baseDirectory>
<buildArgs>/testcontainer:testproject1\bin\release\TestProject1.dll
/runconfig:localtestrun.Testrunconfig
/resultsfile:C:\Results\testproject1.results.trx</buildArgs>
<buildTimeoutSeconds>600</buildTimeoutSeconds>
</exec>
To elaborate on VladV's answer and make things a bit more concrete: if you follow the suggested naming convention, running your tests can easily be automated with MSBuild. The following snippet from the MSBuild file of my current project does exactly what you asked.
<Target Name="GetTestAssemblies">
<CreateItem
Include="$(WorkingDir)\unittest\**\bin\$(Configuration)\**\*Test*.dll"
AdditionalMetadata="TestContainerPrefix=/testcontainer:">
<Output
TaskParameter="Include"
ItemName="TestAssemblies"/>
</CreateItem>
</Target>
<!-- Unit Test -->
<Target Name="Test" DependsOnTargets="GetTestAssemblies">
<Message Text="Normal Test"/>
<Exec
WorkingDirectory="$(WorkingDir)\unittest"
Command="MsTest.exe #(TestAssemblies->'%(TestContainerPrefix)%(FullPath)',' ') /noisolation /resultsfile:$(MSTestResultsFile)"/>
<Message Text="Normal Test Done"/>
</Target>
Furthermore, integrating MSBuild with CruiseControl.NET is a piece of cake.
Edit
Here's how you can 'call' msbuild from your ccnet.config.
First, if you do not already use MSBuild for your build automation, add the following XML around the snippet presented earlier:
<Project DefaultTargets="Build"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
..... <insert snippet here> .....
</Project>
Save this in e.g. RunTests.proj next to your solution in your source tree. Now you can modify the bit of ccnet.config above to the following:
<msbuild>
<executable>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\MSBuild.exe</executable>
<workingDirectory>C:\projects\mysolution\</workingDirectory>
<baseDirectory>C:\projects\mysolution\</baseDirectory>
<projectFile>RunTests.proj</projectFile>
<targets>Test</targets>
<timeout>600</timeout>
<logger>C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
</msbuild>
This is an old thread, but I have been struggling with the same issue and I realized that you can really just run MSTest on every dll in the whole solution and it doesn't really cause any problems. MSTest is looking for methods in the assemblies marked with the [TestMethod] attribute, and assemblies that aren't "test" assemblies just won't have any methods decorated with that attribute. So you get a "No tests to execute." message back and no harm done.
So for example in NAnt you can do this:
<target name="default">
<foreach item="File" property="filename">
<in>
<items>
<include name="**\bin\Release\*.dll" />
</items>
</in>
<do>
<echo message="${filename}" />
<exec program="C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe">
<arg value="/testcontainer: ${filename}" />
<arg value="/nologo" />
</exec>
</do>
</foreach>
</target>
and it will run all the test methods in every DLL in every bin\Release folder in the solution. Those which are not test DLLs will return "No tests to execute." and those that have tests will have the tests run. The only part I haven't figured out yet is that (in NAnt) execution stops the first time a command returns a non-zero value. So if any unit tests fail, it doesn't keep going to execute the tests in subsequent assemblies. That is not great, but if all the tests pass, then they will all run. (One possible workaround is sketched below.)
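A rough, untested sketch of one workaround (the property names here are made up): NAnt's <exec> task has failonerror and resultproperty attributes, so you can let the loop keep going, record that something failed, and fail the build at the end.
<!-- before the <foreach> loop -->
<property name="tests.failed" value="false" />
<!-- inside <do>, instead of the plain <exec> -->
<exec program="C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"
      failonerror="false"
      resultproperty="mstest.exit.code">
    <arg value="/testcontainer:${filename}" />
    <arg value="/nologo" />
</exec>
<if test="${int::parse(mstest.exit.code) != 0}">
    <property name="tests.failed" value="true" overwrite="true" />
</if>
<!-- after the <foreach> loop -->
<fail if="${tests.failed}" message="One or more test containers had failing tests." />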
I just resolved this problem recently. Here is my proposal: use the /testmetadata + /testlist options of MSTest.
First you should create a test list in the test metadata (.vsmdi) file.
The command line should be: mstest /testmetadata:....vsmdi /testlist:<name>
Then use the ccnet config to run MSTest.
I know this thread is quite old, but it's still high on Google so I thought I might help one or two.
Anyway, since there is no satisfactory solution for this,
I've written an MSBuild task for it.
Details can be found here:
http://imistaken.blogspot.com/2010/08/running-all-tests-in-solution.html
You could enforce some convention on the naming and location of test projects, then you could run MSTest on, say, all *Test.dll below the location of your solution.
As far as I know, there is no way to tell a test project from a 'normal' DLL project based solely on a solution file. So, an alternative could be to analyze the project files and/or .vsmdi files to find the test projects, but that could be rather tricky.
I don't know directly, but this is where the VSMDI [fx: spits in a corner] can help. In your solution, add all the tests to the VSMDI, and then pass the VSMDI to MSTest using /testmetadata.
However, I would suggest that you follow the conventions above: use a naming convention and dump the list of test DLLs out of the SLN file using, say, a for loop in the command script.
I would just write a target that calls it the way you want, then whip up a batch file that calls the target that contains all the DLLs to be tested.
Unless you're adding test projects all the time, you'll very rarely need to modify it.
Why not just have MSBuild output all the test assemblies to a folder?
Try setting the OutputPath, OutputDir, or OutDir properties in MSBuild to accomplish this,
then have MSTest execute against all assemblies in that folder.
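A rough sketch of that approach (the paths and names are hypothetical, and the commands are typed at a prompt, so the for variable uses a single % - double it inside a .cmd file):
msbuild MySolution.sln /p:OutDir=C:\build\testbin\
for %f in (C:\build\testbin\*Test*.dll) do mstest.exe /testcontainer:%f /nologo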
