Non-pass/fail Jenkins result - performance

Is there a way to make Jenkins accept and graph test results that aren't binary passes/fails?
I'm writing a performance test for an Open Source project I contribute to. After each successful build, I would like Jenkins to run a bash script I've written, then report as the test result a value I compute during the test. The value would be on the order of 10k, if that matters. The idea is to allow devs to view the historical performance of the codebase, as well as how their commits changed it.
I'm new to Jenkins, but I've Googled pretty hard and found nothing relevant. Links are appreciated, even if you don't have a full answer.

The Plot plugin should be able to do what you need; you can store the test results in CSV format and then graph them across all builds.
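For example, the last step of your script could write the computed value to a small CSV file (a header row naming the series, then a data row) that the Plot plugin is configured to read after each build. A minimal sketch, in Python for illustration; the file name "perf.csv" and the "throughput" label are placeholders for whatever your test actually measures:

# Write the metric computed by the performance test to a CSV file
# that the Jenkins Plot plugin is configured to read after the build.
def write_plot_csv(value, path="perf.csv"):
    with open(path, "w") as f:
        f.write("throughput\n")   # header row: the series label shown on the plot
        f.write("%d\n" % value)   # data row: this build's value

if __name__ == "__main__":
    write_plot_csv(10432)  # e.g. the ~10k figure the test computes

Point the Plot plugin at perf.csv in the job configuration and it should accumulate one data point per build, giving you the historical graph you're after.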

How do I include a file dynamically into a TeamCity build

I am fairly new to TeamCity and have recently been tasked with creating various builds, which I have done with no real issues.
What I am trying to do now though is include an external text file into the build output.
The external text file will be received from a service call made during the build.
These are my intended build steps:
1. Check out the solution.
2. Restore packages.
3. Run tests.
4. Call the web service with a configurable parameter and receive a text file back.
5. Include the text file in the build output.
6. Deploy.
Steps 1, 2, 3 and 6 are covered.
What are my options here? I must confess I do not really know where to begin.
I've spent some time today googling but it has been tricky getting the correct search term to return information on what I am trying to achieve.
I've seen some confusing articles on a 'meta runner'.
Any pointers to get me started in the right direction would be much appreciated.
Thanks.
Use a TeamCity command line build step - https://confluence.jetbrains.com/display/TCD9/Command+Line
I assume you are using build steps for all the other steps you listed, so this is simply another of those.
The command line process would run somewhere under your checkout folder, and thus anything it downloads would be made available as an artifact for your build.
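To make step 4 concrete, the command line step could invoke a small script along these lines. This is only a sketch: the service URL, parameter name and output file name are all placeholders for your real service.

import sys
import urllib.request

# Placeholder endpoint; substitute the real web service URL.
SERVICE_URL = "https://example.com/api/textfile?config="

def fetch_text_file(param, dest="service-output.txt"):
    # Download into the current working directory, which for a TeamCity
    # command line step is (by default) under the checkout folder, so the
    # file can be used by later steps or published as an artifact.
    with urllib.request.urlopen(SERVICE_URL + param) as resp:
        with open(dest, "wb") as out:
            out.write(resp.read())

if __name__ == "__main__":
    # TeamCity can pass a configuration parameter as a script argument,
    # e.g. %MyParameter% in the build step's arguments field.
    fetch_text_file(sys.argv[1])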

How would I hook into rake's tasks to time how long each takes, to try to eliminate slow bits of build script?

I'm interested in knowing which parts of my rake-based build (running within TeamCity) are slow. Is there an MVC-filter-style way I can wrap rake tasks so that each one runs within a timer, and I can output a breakdown of:
time spent on each task including prerequisites (I guess the time between invoke starting and execute finishing)
time spent on each task excluding prerequisites (I guess the time between execute starting and finishing)
so that I can analyse which parts of my build are taking the most time, to target my optimisation efforts?
Does TeamCity have any features baked in that would do this for me? (I know I'll be able to chart the results of my performance-logging with custom-charts; I just wondered whether it could do this out of the box already.)
First, in TeamCity 6.0 there is a tree view of the build log. In this tree view you can see the time spent in each block of your build.
Also, in TeamCity's rake runner there is a "Track invoke/execute stages" option, which can be enabled to get more information in your build log (with timing information for each record).
You can also try adding rake parameters like -t or -v in the TeamCity rake settings to get more verbose output.
TeamCity also allows you to use custom service messages to provide more information to your log and to your build.
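For example, anything in the build that prints to stdout can feed TeamCity via service messages. Here is a minimal sketch of timing a step this way, written in Python purely for illustration (the same ##teamcity lines could equally be printed from within a rake task); the "compile" name and the statistic key are made up:

import time
from contextlib import contextmanager

@contextmanager
def teamcity_timed_block(name):
    # Opens a collapsible block in the TeamCity build log and reports the
    # elapsed time as a custom statistic that can be charted across builds.
    print("##teamcity[blockOpened name='%s']" % name, flush=True)
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed_ms = int((time.monotonic() - start) * 1000)
        print("##teamcity[buildStatisticValue key='%s.duration.ms' value='%d']"
              % (name, elapsed_ms), flush=True)
        print("##teamcity[blockClosed name='%s']" % name, flush=True)

if __name__ == "__main__":
    with teamcity_timed_block("compile"):
        time.sleep(0.1)  # stand-in for the real build step

Statistics reported this way can then be graphed with TeamCity's custom charts.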
Hope this helps,
KIR

How do you change the layout of JUnit Reports in Hudson?

So, I'm setting up Hudson right now and couldn't be more pleased. However, I need to display a table in the test results page as opposed to the graph it provides. Does anyone know how I would go about doing this?
I guess you'd want to make a custom plugin out of the existing JUnit functionality. You can pretty much copy the Java files from:
hudson/main/core/src/main/java/hudson/tasks/junit/
and resource files (jellys) from:
hudson/main/core/src/main/resources/hudson/tasks/junit/
to your new plugin (unless you'd rather fork the Hudson source directly). It seems that the files you'd want to fiddle around with are
hudson/main/core/src/main/java/hudson/tasks/junit/History.java (where the graphs are created) and hudson/main/core/src/main/resources/hudson/tasks/junit/History/index.jelly (where the created graphs are shown). From the History class you can pretty easily get a grip on how to work with TestObjects.
What do you want to display in the table - just the results from the latest build, or the same trend data that the default graph displays? Either way, I think you'd need to modify the Hudson code to do what you want - see the Hudson Wiki.

Cruise Control .NET time build spends in failed state

My team has a goal to minimize the amount of time that our build is broken.
We use CruiseControl.NET for continuous integration. What I'd like to find out is how best to approach answering the following question:
"In the last {timespan}, how much time has {project-name} spent in a broken status?"
For example:
"Over the last 1 month, how much time has our project spent in a broken status?"
Are there any advanced features of CruiseControl.NET that would facilitate making this information available in some type of report or somewhere in the dashboard?
Alternatively, how would you approach parsing the XML artifact files to glean this info?
You can use the statistics publisher:
http://www.cruisecontrolnet.org/projects/ccnet/wiki/Statistics_Publisher
and display the results via the project statistics plugin.
I see at least two ways to approach this:
You write an external tool which parses CC.NET's XML log files for a project (stored in the buildlogs subdirectory by default), calculates statistics and writes an HTML report (see the sketch after the links below). This is probably easier to do, but it won't be directly integrated with CC.NET.
You write a CC.NET plug-in to do this. You'll need to do a bit of investigating in this case. My guess is that the starting point would be to look at the source code of some existing plug-in.
Here are some links about CCNET plugins:
http://www.cruisecontrolnet.org/projects/ccnet/wiki/DevInfo_MakingPlugins
BrekiLabeller - my own plug-in, useful if you want to see how a plug-in can be implemented.
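For approach 1, here is a rough sketch of the parsing. It leans on CC.NET's log file naming convention (successful builds get a name like log20090101120000Lbuild.1.xml, failed builds just the timestamp, log20090101120000.xml); verify that against your own buildlogs directory before relying on it. A failed build is counted as "broken" until the next build of any status:

import os
import re
from datetime import datetime, timedelta

# Assumed CC.NET naming: failed builds -> log<timestamp>.xml,
# successful builds -> log<timestamp>Lbuild.<label>.xml.
LOG_RE = re.compile(r"^log(\d{14})(Lbuild\..+)?\.xml$")

def broken_time(buildlogs_dir, since):
    builds = []
    for name in sorted(os.listdir(buildlogs_dir)):
        m = LOG_RE.match(name)
        if m:
            when = datetime.strptime(m.group(1), "%Y%m%d%H%M%S")
            if when >= since:
                builds.append((when, m.group(2) is not None))  # (time, succeeded)
    total = timedelta()
    # Time between a failed build and the next build counts as broken;
    # a final, still-broken build is not counted in this simple version.
    for (when, ok), (next_when, _) in zip(builds, builds[1:]):
        if not ok:
            total += next_when - when
    return total

if __name__ == "__main__":
    print(broken_time("buildlogs", datetime.now() - timedelta(days=30)))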
Having had a very quick look at the CC docs, I imagine that if you were writing your own CruiseControl dashboard, you could consume the RSS feed of build results, parse all the dates/times and success/failure states up to your threshold, then sum up the totals.
As for displaying it in a dashboard, I think Cruise Control has a plugin architecture which might help http://cruisecontrol.sourceforge.net/main/plugins.html
So my eventual solution wasn't ideal, but it was easy to do and it works:
I had CC.NET send build emails to a dedicated address (we'll call it build_emails@build_statistics.com). Then I use a Ruby script to fetch the emails via IMAP and process them to work out our build failure time.
I didn't go the route of directly parsing the XML because I would have had to parse every XML file in the timeframe to build up a timeline, and then go over the timeline to make my calculations. It just seemed too complicated a way to get a simple statistic like this.
I like CC.NET, but in this case TeamCity just does this for you. It has lots of other great statistics too. It's free for fewer than 20 projects.

Is there a GUI for nosetests

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job, but it is not great for giving a visual view of which tests are working or breaking.
I've used several other GUI-based unit test frameworks that provide a visual snapshot of the state of your unit tests as well as drill-down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?
You can use the rednose plugin to color up your console. The visual feedback is much better with it.
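If it helps, rednose needs no configuration beyond installing it and passing a flag (flag name as documented by the plugin; double-check against your installed version):

pip install rednose
nosetests --rednose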
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up and then maintain everything, but I could get nice visual reports with failed tests and error messages, plus graphs over time for failed tests, pylint problems and code coverage.
Bitten is a continuous integration plugin for Trac. It has a master-slave architecture: the Bitten master is integrated with and runs together with Trac, while a Bitten slave can run on any system that can communicate with the master. The slave regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends a "build recipe", similar to Ant's build.xml, to the slave; the slave follows the recipe and sends back results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to get started on this GUI as it provides all you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .ui files created in Qt Designer to Python scripts with:
pyuic4 -x interface.ui -o interface.py
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)
I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can also re-run tests using the shell's 'watch' command (but this just runs them every X seconds, which is fine, but isn't quite snappy enough to keep me happy.)
Alternatively, I wrote a quick Python package, 'rerun', which polls for filesystem changes and then re-runs the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. the Python source tree), and works even in complicated cases (see below.)
https://pypi.python.org/pypi/rerun/
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this which listen for filesystem events sometimes don't work as I expect (e.g. if Vim saves a file by writing a tmp file somewhere else and then moving it into place, the events that happen sometimes aren't the ones you expect). Hence 'rerun'.
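For the curious, the polling approach 'rerun' takes boils down to something like the following sketch (simplified; the real package has more niceties such as ignore patterns):

import os
import subprocess
import sys
import time

def snapshot(root="."):
    # Map every file under root to its last-modified time.
    mtimes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between listing and stat
    return mtimes

def rerun_on_change(command, interval=0.25):
    last = None
    while True:
        current = snapshot()
        if current != last:
            subprocess.call(command)
            # Re-snapshot so files the run itself wrote (.pyc etc.)
            # don't immediately trigger another run.
            last = snapshot()
        time.sleep(interval)

if __name__ == "__main__":
    rerun_on_change(sys.argv[1:])  # e.g. python rerun_sketch.py nosetests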
