TeamCity: Scan all files for text

I currently use CC.NET to run an NAnt build file. In the NAnt script, I use the grep task to scan for TODO/BUG/HACK comments, and that report gets folded into the main build report. I'd like to know whether something like this is already built into TeamCity somewhere.
Or should I just create another build step to run the same NAnt script? If that is the case, where do I dump the results of that scan, and how do I then pull that XML output into the TeamCity build results? This is what my NAnt target looks like:
<target name="todoScan" description="Generate report on TODO items remaining in code">
<grep output="${base.report.dir}\${projectname}_todoscan.xml" pattern="(?'Type'TODO|BUG|HACK): (?'Text'[^\n\r]*)">
<fileset basedir="${projectdir}">
<include name="**\*.vb" />
<include name="**\*.js" />
<include name="**\*.aspx" />
<include name="**\*.ascx" />
<exclude name="**\*-vsdoc.js" />
<exclude name="**\jquery-1.3.2.js" />
</fileset>
</grep>
</target>

I am not aware of any built-in TeamCity functionality that will perform that operation.
As long as you write the file to an accessible directory, you can include it in the published artifacts using the "Artifact paths" field under "1. General Settings". The file will then be accessible from the Artifacts tab on the dashboard.
If you like, you can then add a new tab to the dashboard that will display your file on each build: go to "Administration", "Server Configuration", "Report Tabs" and click "Create a new report tab".
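For example, assuming ${base.report.dir} resolves to a reports folder under the checkout directory and the project name is MyProject (both assumptions), the "Artifact paths" entry might look like this:
reports\MyProject_todoscan.xml => todo-report
The file then shows up under todo-report on the Artifacts tab, and a report tab's start page could point at todo-report/MyProject_todoscan.xml (browsers will just render the raw XML unless you transform it to HTML first).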

I was actually in the same situation, coming from Jenkins where I used a plugin to show things like IDEA/TODO/MUDO. Since I also moved to TeamCity recently, I made a plugin for this. It's very new and very basic, but does what it needs to do for me. If you're interested, it's available on GitHub: Todo TeamCity plugin.

Related

Getting mstest results to display in CruiseControl.Net Build Report

I am trying to setup CruiseControl.Net to display results from mstest unit tests (msbuild / Visual Studio 2010) in the build report. I am merging the results and they do show up properly in the build log, but not in the build report. I have also tried just dumping some text in the MsTestSummary2008.xsl and that text does show in the build report, so the xsl does seem to be included properly.
Do I need a new/different .xsl for VS2010? Thanks in advance.
ccnet.config:
<publishers>
  <merge>
    <files>
      <file>C:\_Projects\test\code\results\results1.trx</file>
    </files>
  </merge>
  <xmllogger />
  <statistics />
</publishers>
dashboard.config:
<buildPlugins>
  <buildReportBuildPlugin>
    <xslFileNames>
      <xslFile>xsl\header.xsl</xslFile>
      <xslFile>xsl\modifications.xsl</xslFile>
      <xslFile>xsl\MsTestSummary2008.xsl</xslFile>
    </xslFileNames>
  </buildReportBuildPlugin>
  <buildLogBuildPlugin />
  <xslReportBuildPlugin description="MSTest Report" actionName="MSTESTReport" xslFileName="xsl\MsTestReport2008.xsl" />
</buildPlugins>
Perhaps the modified XSL mentioned in this thread will solve your problem.

ant conditional targets and 'recursion'

I'm fairly new to Ant, and I've seen Uncle Bob's "extract until you drop" episode.
As a result I try to define Ant targets as small as possible, so you can see exactly the essence of the target, and no more. For more details, you have to refer to sub-targets.
Whether that's good or bad style is a different debate (or a flame-war maybe).
Therefore, I was creating a build script that, in pseudo-code, would look like this:
build =
compile
instrument if coverage
The coverage task is split into subtargets, too:
coverage:
create-coverage-dirs
call-cobertura
EDIT: I want to express that the coverage sub-targets should not be run when coverage is disabled (i.e. when build.with.coverage is not set).
But... I'm having a hard time expressing this 'cleanly' in ant-ese.
Assuming that I can use the depends attribute to indicate ... inter-target dependencies, I got to something like this:
<target name="build" depends="compile, coverage"/>
<target name="compile"> .... </target>
<target name="coverage" depends="
create-coverage-dirs,
taskdef-cobertura"
if="build.with.coverage">
<cobertura-instrument ...> ... </cobertura-instrument>
</target>
<target name="create-coverage-dirs">
...
</target>
<target name="taskdef-cobertura">
...
</target>
Wow, this looked nice!
Only it seemed that, when executing, the coverage target was dutifully omitted, but its sub-targets were still executed when build.with.coverage was false!
>ant -v build
Build sequence for target(s) `build' is
[compile, create-coverage-dirs, taskdef-cobertura, coverage, build]
Complete build sequence is
[compile, create-coverage-dirs, taskdef-cobertura, coverage, build, ]
I can put an if attribute in every coverage sub-task, but that doesn't seem clean to me.
So here's the question:
Is my ant-ese a horrible dialect? Am I 'making ant into make'?
Should if be used this way, or is there an if-and-recurse kind-of attribute?
Repeat after me: Ant is not a programming language. In fact, write it down 100 times on the blackboard.
Ant is not a programming language, so don't think of it as such. It is a build dependency matrix.
It's difficult for programmers to wrap their heads around that idea. They want to tell Ant each step and when it should be done. They want loops, if statements. They'll resort to having a build.sh script to call various targets in Ant because you can't easily program Ant.
In Ant, you specify discrete tasks, and which tasks depend upon other tasks, and let Ant handle where and when things get executed.
What I am saying is that you don't normally split tasks into sub-tasks and then try calling <ant> or <subant> on them.
Have discrete tasks, but then let each task know what other tasks it depends upon. Also remember that there is no true order in Ant. When you list the depends= tasks, there is no guarantee which order they'll be executed in.
Standard Ant Style (which means the way I do it (aka The Right Way), and not the way my colleague does it (aka The Wrong Way)) normally says to define tasks at the top of the build file and not in any target. Here's a basic outline of how I structure my build.xml:
<project name=...>
  <!-- Build Properties File -->
  <property name="build.properties.file"
            value="${basedir}/build.properties"/>
  <property file="${build.properties.file}"/>
  <!-- Base Java Properties -->
  <property name="..." value="..."/>
  <taskdef/>
  <taskdef/>
  <!-- Javac properties -->
  <property name="javac..." value="..."/>
  <task/>
  <task/>
</project>
This creates an interesting hierarchy. If you have a file called build.properties, it will override the properties defined in the build.xml script. For example, say you have:
<property name="copy.verbose" value="false"/>
<copy todir="${target}"
verbose="${copy.verbose}">
<fileset dir="${source}"/>
</copy>
You can turn on the verbose copy simply by setting copy.verbose = true in your build.properties file. And you can point at a different build properties file by specifying it on the command line:
$ ant -Dbuild.properties.file="my.build.properties"
(Yes, yes, I know there's a -propertyfile command-line parameter for ant.)
I normally set the various values in the build.xml to the assumed defaults, but anyone can change them by creating a build.properties file. And, since all the base properties are at the beginning, they're easy to find.
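For illustration, a hypothetical build.properties overriding one of those defaults could be as simple as:
# build.properties (hypothetical) - overrides the default set in build.xml
copy.verbose = true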
Tasks are defined in this non-target space too. That way, I can easily find the definition since it's in the same place in each build.xml, and I know I can use a task without worrying about whether the task-defining target has been hit.
Now, to your question:
Define your targets (and don't have a task-defining target, or you'll drive yourself crazy). Then, define the dependencies on each of those targets. Developers can select the targets they want to hit. For example:
<project>
  <description>
    yadda, yadda, yadda
  </description>
  <taskdef name="cobertura"/>
  <target name="compile"
          description="Compile the code"/>
  <!-- Do you have to compile code before you run Cobertura? -->
  <target name="coverage"
          description="Calculate test coverage"
          depends="compile">
    <mkdir dir="${coverage.dir}"/>
    <cobertura-instrument/>
  </target>
</project>
If you want to compile your code, but not run any tests, you execute ant with the compile target. If you want to run tests, you execute ant with the coverage target. There's no need for the if="build.with.coverage" attribute.
Also notice the description= parameter and the <description> task. That's because if you do this:
$ ant -p
Ant will show what's in the <description> task, all targets with a description parameter, and that description. This way, developers know what targets to use for what tasks.
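For the outline above, the output of ant -p would look roughly like this (the exact layout varies by Ant version, and targets without a description are not listed):
Buildfile: build.xml

  yadda, yadda, yadda

Main targets:

 compile   Compile the code
 coverage  Calculate test coverage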
By the way, I also recommend doing things the right way (aka doing it the way I do it) and naming your targets after the Maven lifecycle goals. Why? Because it's a good way to standardize the names of targets. Developers know that clean will remove all built artifacts, compile will run the <javac> task, and test will run the JUnit tests. Thus, you should use the goal name from the Cobertura plugin: cobertura.
Edit
My problem is: I regard 'coverage' as related to 'optimized' and 'debug', i.e. a build flavor. That's where my difficulty lies: for Java, coverage results in an extra intermediate target in the compile step.
I'm looking at the Cobertura page, and there's no real change in the <javac> task (which is part of the compile target).
Instead, you run Cobertura on the already built .class files, and then run your <junit> task. The big change is in your <junit> task, which must now include references to your Cobertura jars and to your instrumented classes.
I imagine you could have a cobertura target or whatever you want to call it. This target runs the instrumented JUnit tests. This is the target you want developers to hit, and it should contain a description saying that it runs instrumented tests.
Of course, you can't run the instrumented JUnit tests without first instrumenting them. Thus, your cobertura target will depend upon another instrument.tests target. This target is internal. People who run your build.xml don't normally say "instrument tests" without running those tests. Thus, this target has no description.
Of course, the instrument.tests target depends upon having .class files to instrument, so it will have a dependency upon the compile target that runs the <javac> task:
<target name="instrument.classes"
depends="compile">
<coburtura-instrument/>
</target>
<target name="corburtura"
depends="instrument.classes"
description="Runs the JUnit tests instrumented with Corburtura">
<junit/>
</target>
The only problem is that you're specifying your <junit> task twice: once when instrumented, and once for normal testing. This might be a minor issue, but if you update how your JUnit tests run, you have to do it in two places.
If you want to solve this issue, you can use <macrodef> to define a JUnit test-running macro. I used what was on the Cobertura page to help with the outline. Completely untested and probably full of syntax errors:
<target name="instrument.tests"
depends="compile">
<corburtura-instrument/>
</target>
<target name="corburtura"
depends="instrument.tests"
description="Instrument and run the JUnit tests">
<run.junit.test fork.flag="true">
<systemproperty.addition>
<sysproperty key="net.sourceforge.corbertura.datafile"
file="${basedir}/cobertura.ser" />
</systemproperty.addition>
<pre.classpath>
<classpath location="${instrumented.dir}" />
</pre.classpath>
<post.classpath>
<classpath refid="cobertura_classpath" />
</post.classpath>
</run.junit.test>
</target>
<target name="test"
description="Runs the Junit tests without any instrumentation">
<run.junit.test/>
</target>
<macrodef name="run.junit.test">
<attribute name="fork.flag" default="false"/>
<element name="sysproperty.addition" optional="yes"/>
<element name="pre.classpath" optional="yes"/>
<element name="post.classpath" optional="yes"/>
<sequential>
<junit fork="#{fork.flag}" dir="${basedir}" failureProperty="test.failed">
<systemproperty.addtion/>
<pre.classpath/>
<classpath location="${classes.dir}" />
<post.classpath/>
<formatter type="xml" />
<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />
<batchtest todir="${reports.xml.dir}" unless="testcase">
<fileset dir="${src.dir}">
<include name="**/*Test.java" />
</fileset>
</batchtest>
</junit>
</sequential>
</macrodef>
I would not use a property at all in this case, but rely solely on depends (which seems more natural to me for this task):
<target name="build" depends="compile, coverage"/>
<target name="compile"> ...
<target name="coverage"
depends="compile, instrument,
create-coverage-dirs, taskdef-cobertura"> ...
The if attribute tests whether the property exists, not whether it is true or false. If you don't want to run the coverage target, then don't define the build.with.coverage property at all.
As of Ant 1.8.0 you can use property expansion to resolve the property as a boolean:
<target name="coverage" depends="
create-coverage-dirs,
taskdef-cobertura"
if="${build.with.coverage}">

How to use StyleCop with TeamCity

Has anyone had any success with running StyleCop from TeamCity?
I know StyleCop supports a command-line mode; however, I am not sure how this will integrate into the report output by TeamCity.
I've checked out this plugin found here: https://bitbucket.org/metaman/teamcitydotnetcontrib/src/753712db5df7/stylecop/
However could not get it running.
I am using TeamCity 6.5.1 (latest).
I don't know how familiar you are with MSBuild, but you should be able to add a new Build Step in TeamCity 6 and above, set MSBuild as the build runner, and point it at a .proj file that does something similar to the following:
<Target Name="StyleCop">
<!-- Create a collection of files to scan -->
<CreateItem Include="$(SourceFolder)\**\*.cs">
<Output TaskParameter="Include" ItemName="StyleCopFiles" />
</CreateItem>
<StyleCopTask
ProjectFullPath="$(MSBuildProjectFile)"
SourceFiles="#(StyleCopFiles)"
ForceFullAnalysis="true"
TreatErrorsAsWarnings="true"
OutputFile="StyleCopReport.xml"
CacheResults="true" />
<Xslt Inputs="StyleCopReport.xml"
RootTag="StyleCopViolations"
Xsl="tools\StyleCop\StyleCopReport.xsl"
Output="StyleCopReport.html" />
<XmlRead XPath="count(//Violation)" XmlFileName="StyleCopReport.xml">
<Output TaskParameter="Value" PropertyName="StyleCopViolations" />
</XmlRead>
<Error Condition="$(StyleCopViolations) > 0" Text="StyleCop found $(StyleCopViolations) broken rules!" />
</Target>
If you don't want to fail the build on a StyleCop error, replace the Error task with a Warning task instead.
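For example (a sketch reusing the property from the target above):
<Warning Condition="$(StyleCopViolations) > 0" Text="StyleCop found $(StyleCopViolations) broken rules!" />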
You'll also need to add the following to your .proj file:
<UsingTask TaskName="StyleCopTask" AssemblyFile="$(StyleCopTasksPath)\Microsoft.StyleCop.dll" />
Microsoft.StyleCop.dll is included in the StyleCop installation, and you'll need to set your paths appropriately.
To see the outputted StyleCop results in TeamCity, you will need to transform the .xml StyleCop report to HTML using an appropriate .xsl file (called StyleCopReport.xsl in the script above).
To display the HTML file in TeamCity, you'll need to create an artifact from this .html output, and then include that artifact in the build results.
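As a sketch (the stylecop folder name is just an assumption), the "Artifact paths" field could contain:
StyleCopReport.html => stylecop
and the report tab's start page would then point at stylecop/StyleCopReport.html.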
The Continuous Integration in .NET book is a great resource.
Did you know that TeamCity provides specific MSBuild tasks for exactly this?
No need to write the service messages yourself; see:
http://confluence.jetbrains.net/display/TCD65/MSBuild+Service+Tasks
So you don't have to add a custom report page.
Use the build status, e.g.:
<TeamCitySetStatus Status="$(AllPassed)" Text="Violations: $(StyleCopViolations)" />
You can then log the statistic too:
<TeamCityReportStatsValue Key="StyleCopViolations" Value="$(StyleCopViolations)" />
Then create a custom graph to display it; you already have the violation count in your MSBuild output.
Edit main-config.xml and add:
<graph title="Style Violations" seriesTitle="Warning">
  <valueType key="StyleCopViolations" title="Violations" buildTypeId="bt20"/>
</graph>
where bt20 is the build configuration ID of your StyleCop build.
I'm late to the show here, but a very easy way to achieve this is to install the StyleCop.MSBuild NuGet package in any project which you want to analyse with StyleCop.
After installing the package, StyleCop analysis will run on every build you do, regardless of where or how it is invoked, e.g. VS, command line, msbuild, psake, rake, fake, bake, nant, build server, etc. No special actions are required.
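For reference, installing it from the NuGet Package Manager Console looks like this (you can also use the Manage NuGet Packages UI):
PM> Install-Package StyleCop.MSBuild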
If you want the build to fail when StyleCop rules are broken, you just need to add the following element to your project file under each appropriate build configuration, e.g.:
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>
...
Again, this will work on every build, regardless of where and how it is invoked.
There's a (new?) third-party TeamCity plugin for StyleCop here (though I haven't tried it yet).
UPDATE: as far as I can tell, the latest version only works with TeamCity 7 (or I did something wrong). Also, I have a very slow (virtual) build server, so even after the services were restarted, it took a while for the StyleCop runner to appear in the web interface.
Another stupid thing I did was not read the readme properly: you have to unzip the downloaded zip, and use the zip inside.
I also originally started with just a list of .cs files in the "Include" option (for the build step), but that didn't work; links to sln or csproj files do work though.

VS2010 Web Publish command line version of File System deploy

Folks,
In a nutshell, I want to replicate the Visual Studio 2010 "Publish" dialog (using the File System publish method).
It's a Visual Studio 2010 ASP.NET MVC project. If I run that publish, I get all the files I want, including the transformed web.configs, in the "C:\ToDeploy" directory.
I want to replicate this on the command line so I can use it for a QA environment build.
I've seen various articles on how to do this on the command line for Remote Deploys, but I just want to do it for File System deploys.
I know I could replicate this functionality using nAnt tasks or rake scripts, but I want to do it using this mechanism so I'm not repeating myself.
I've investigated this some more, and I've found these links, but none of them solve it cleanly:
VS 2008 version, but no Web.Config transforms
Creates the package, but doesn't deploy it... do I need to use MSDeploy on this package?
Deploys the package after creating it as above... does the UI really do this two-step tango?
Thanks in advance!
Ok, finally figured this out.
The command line you need is:
msbuild path/to/your/webdirectory/YourWeb.csproj /p:Configuration=Debug;DeployOnBuild=True;PackageAsSingleFile=False
You can change where the project outputs to by adding a property of outdir=c:\wherever\ in the /p: section.
This will create the output at:
path/to/your/webdirectory/obj/Debug/Package/PackageTmp/
You can then copy those files from the above directory using whatever method you'd like.
I've got this all working as a Ruby rake task using Albacore. I'm trying to finish it up so I can contribute it to that project, but if anyone wants the code before then, let me know.
Another wrinkle I found was that it was putting tokenized parameters into the Web.config. If you don't need that feature, make sure you add:
/p:AutoParameterizationWebConfigConnectionStrings=false
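Putting the pieces together, a hypothetical full invocation (the configuration and output path are just examples) would be:
msbuild path/to/your/webdirectory/YourWeb.csproj /p:Configuration=Debug;DeployOnBuild=True;PackageAsSingleFile=False;AutoParameterizationWebConfigConnectionStrings=false;outdir=C:\ToDeploy\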
I thought I'd post another solution that I found; I've updated it to include a log file.
This is similar to Publish a Web Application from the Command Line, but cleaned up and with a log file added. Also check out the original source: http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild
Create an MSBuild_publish_site.bat (name it whatever) in the root of your web application project:
set msBuildDir=%WINDIR%\Microsoft.NET\Framework\v4.0.30319
set destPath=C:\Publish\MyWebBasedApp\
:: clear existing publish folder
RD /S /Q "%destPath%"
call %msBuildDir%\msbuild.exe MyWebBasedApp.csproj "/p:Configuration=Debug;PublishDestination=%destPath%;AutoParameterizationWebConfigConnectionStrings=False" /t:PublishToFileSystem /l:FileLogger,Microsoft.Build.Engine;logfile=Manual_MSBuild_Publish_LOG.log
set msBuildDir=
set destPath=
Update your Web Application project file MyWebBasedApp.csproj by adding the following XML after the <Import Project=...> element:
<Target Name="PublishToFileSystem" DependsOnTargets="PipelinePreDeployCopyAllFilesToOneFolder">
<Error Condition="'$(PublishDestination)'==''" Text="The PublishDestination property must be set to the intended publishing destination." />
<MakeDir Condition="!Exists($(PublishDestination))" Directories="$(PublishDestination)" />
<ItemGroup>
<PublishFiles Include="$(_PackageTempDir)\**\*.*" />
</ItemGroup>
<Copy SourceFiles="#(PublishFiles)" DestinationFiles="#(PublishFiles->'$(PublishDestination)\%(RecursiveDir)%(Filename)%(Extension)')" SkipUnchangedFiles="True" />
</Target>
This works better for me than the other solutions.
Check out the following for more info:
1) http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild
2) Publish a Web Application from the Command Line
3) Build Visual Studio project through the command line
My solution for CCNET with the Web.config transformation:
<tasks>
  <msbuild>
    <executable>C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
    <workingDirectory>E:\VersionesCC\Trunk_4\SBatz\Gertakariak_Orokorrak\GertakariakMS\Web</workingDirectory>
    <projectFile>GertakariakMSWeb2.vbproj</projectFile>
    <targets>Build</targets>
    <timeout>600</timeout>
    <logger>C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MSBuild.dll</logger>
    <buildArgs>
      /noconsolelogger /p:Configuration=Release /v:diag
      /p:DeployOnBuild=true
      /p:AutoParameterizationWebConfigConnectionStrings=false
      /p:DeployTarget=Package
      /p:_PackageTempDir=E:\Aplicaciones\GertakariakMS2\Web
    </buildArgs>
  </msbuild>
</tasks>
On VS2012 and above, you can refer to existing publish profiles in your project with MSBuild 12.0; this is equivalent to right-clicking, choosing "Publish...", and selecting a publish profile ("MyProfile" in this example):
msbuild C:\myproject\myproject.csproj "/P:DeployOnBuild=True;PublishProfile=MyProfile"
I've got a solution for Visual Studio 2012: https://stackoverflow.com/a/15387814/2164198
However, it works with no Visual Studio installed at all! (see UPDATE).
I haven't checked yet whether you can get all the needed pieces from a Visual Studio Express 2012 for Web installation.
A complete MSBuild file, with inspiration from CubanX:
<Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Name="Publish">
<RemoveDir Directories="..\build\Release\Web\"
ContinueOnError="true" />
<MSBuild Projects="TheWebSite.csproj"
Targets="ResolveReferences;_CopyWebApplication"
Properties="Configuration=Release;WebProjectOutputDir=..\build\Release\Web;OutDir=..\build\Release\Web\bin\"
/>
</Target>
<Target
Name="Build"
DependsOnTargets="Publish;">
</Target>
</Project>
This places the published website in the ..\build\Release\Web folder.
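Assuming you save this next to the website project as, say, publish.proj (the file name is arbitrary), running it is simply:
msbuild publish.proj
since Build is the default target and it depends on Publish.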

Gallio and MbUnit in NAnt

I am trying to use Gallio (v3.1)/MbUnit/NCover to run a unit test in my C# code, as part of the build process for my continuous integration system.
I can get Gallio.Echo.exe to execute the tests and output an XML file (although it does seem to be checking all the .dll files in the folder, resulting in an approx. 6.5 MB .xml file!), but when I try to get NCover to hook in as well, it goes bang.
Then I tried to use the NAnt task, following instructions from here, such as:
<gallio result-property="testrunner.exit-code"
        application-base-directory="bin/debug"
        runner-type="NCover"
        failonerror="false"
        report-name-format="gallio-MyTestProject"
        report-types="xml"
        report-directory="bin/debug">
  <runner-property value="NCoverArguments='//q //ea CoverageExcludeAttribute //a MyTestProject.dll'" />
  <runner-property value="NCoverCoverageFile='coverage-MyTestProject.xml'" />
  <assemblies>
    <include name="bin/debug" />
  </assemblies>
</gallio>
but I get the following error on my command-line:
Element Required! There must be a least one 'files' element for <gallio ... />.
I have tried to specify the .dll file that I'd like to check, but it still comes up with this message. Any suggestions are most appreciated!
The <assemblies> element has been changed to <files>.
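So a sketch of the corrected block would be (the assembly name is an assumption; point it at your actual test .dll rather than at the directory):
<files>
  <include name="bin/debug/MyTestProject.dll" />
</files>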
