What are the best features of Visual Studio Team System?

Microsoft has a lot of stuff in there, but I'm wondering what features of Visual Studio Team System people really like and really use.
I'm specifically thinking about Team System as opposed to plain old Visual Studio.
What makes it worth the price?

I use the Development edition of VSTS 2005 and am evaluating 2008. My top picks:
Profiler
Coding guidelines -- specifically, the rules-enforcement part

My favorites:
Profiler
Integrated Testing Environment: I know a lot of people prefer other test frameworks, but having the integration is just sweet.
FxCop

Some of the best features come from adding Team Foundation Server:
Continuous integration builds can be set up to run unit tests on every build
Code coverage figures can be gathered based on the unit test run
Reports of build success, unit test success, code coverage %, etc. can be produced daily
Code check-in can mark a work item (bug report) fixed, or can start the workflow to do so
It not only gives developers a better idea of what's going on with their code and how to fix it (unit tests, code coverage, code analysis); it also gives management an overall picture of the same without having to come around and bug the developers individually.

I like the line-by-line blame and the profiler (as mentioned), but more importantly, I like the reports it produces, such as defect rates over time.
However, even though there are plenty of features that I like, I certainly don't think it provides good value for money.

Related

SonarQube 6.2 - multi-language setup, show coverage per language

We have a pretty old code base where, at the moment, everything (frontend and backend) is handled within one project. To improve our quality, we set up a multi-language project, so now, instead of analyzing just Java, we also analyse SCSS, HTML, JS, XML, ...
So far everything is running smoothly and working as expected; I am just curious whether there is a way to show "coverage per language". We have a lot of Java tests but no JavaScript tests, and it would be pretty neat to have an overview of how well tested the different languages are!
There is also some business value related to this! As the coverage is no longer separated into integration and unit tests, the JavaScript files now factor into the overall coverage figure -- which we can explain easily enough, but we lose some comparability :D
This is not available natively within SonarQube. If it's really important to you, you'll need to use the web services to pull the data and do the calculations externally.
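As a rough sketch of that approach, the snippet below pulls per-file coverage data over the SonarQube web API and aggregates it by file extension as a stand-in for "coverage per language". The server URL and project key are placeholders, and grouping by extension is my own assumption; the api/measures/component_tree endpoint and the lines_to_cover / uncovered_lines metrics exist in SonarQube 6.x, but check your server's /web_api documentation, and add authentication and paging for real use.

    // Aggregate SonarQube per-file coverage by file extension (uses Newtonsoft.Json).
    // "https://sonar.example.com" and "my-project" are placeholders.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json.Linq;

    class CoveragePerLanguage
    {
        static async Task Main()
        {
            var http = new HttpClient { BaseAddress = new Uri("https://sonar.example.com") };
            var json = await http.GetStringAsync(
                "api/measures/component_tree?component=my-project" +
                "&metricKeys=lines_to_cover,uncovered_lines&qualifiers=FIL&ps=500");

            // Sum raw line counts per extension, then derive a percentage;
            // averaging per-file percentages would skew toward small files.
            var totals = new Dictionary<string, (double toCover, double uncovered)>();
            foreach (var file in JObject.Parse(json)["components"])
            {
                var ext = Path.GetExtension((string)file["path"] ?? "");
                double Metric(string key) =>
                    (double?)file["measures"]?
                        .FirstOrDefault(m => (string)m["metric"] == key)?["value"] ?? 0;

                if (!totals.TryGetValue(ext, out var t)) t = (0, 0);
                totals[ext] = (t.toCover + Metric("lines_to_cover"),
                               t.uncovered + Metric("uncovered_lines"));
            }

            foreach (var pair in totals.Where(p => p.Value.toCover > 0))
                Console.WriteLine(
                    $"{pair.Key}: {100 * (1 - pair.Value.uncovered / pair.Value.toCover):F1}%");
        }
    }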

Can ReSharper run unit tests automatically

Is it possible to automatically run unit tests while you work, without compiling or running them manually? I am aware that NCrunch allows you to do so, but I would prefer to use the ReSharper suite.
This has been available since dotCover 10. See the dotCover documentation for details.
This adds a new panel "Continuous Testing Session" as well as a new status icon in the gutter.
Note that Visual Studio also has this feature, known as Live Unit Testing.
Not possible with ReSharper at the moment; you will need something like NCrunch, which runs your unit tests continuously in the background and highlights code that breaks them as you write it.
Edit: At the time of my response it wasn't possible to do this with ReSharper, but now, in version 10, it is; see Drew Noakes's answer. You could still give NCrunch a try, as it continuously runs your tests in the background even without an explicit save.

Sharing SpecFlow Feature Files with Multiple Applications

My goal is to be able to write core tests that I can use both within a unit testing framework and for UI testing with Selenium.
For a simple test like:
    Scenario: Add two numbers
      Given I have entered 50 into the calculator
      And I have entered 70 into the calculator
      When I press add
      Then the result should be 120 on the screen
I would create unit tests to prove that my core API passes, as well as a Selenium test to prove that my UI is doing the correct thing.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using "Add Existing Item as Link" as the solution.
Update: Adding feature files to a common directory and adding them as links appears to be working great. The feature bindings regenerate for each project the feature file is included in, so I can run unit tests in one and Selenium UI tests in the other.
First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global: load them in from any assembly and they are available to the SpecFlow runner. For what you are trying, this actually makes things harder, as you need to put effort into keeping the UI bindings from being mixed up with the non-UI bindings. One way to do that is sketched below.
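As a sketch, assuming a hypothetical Calculator class from the core API and NUnit-style assertions, the calculator steps could be bound once against the API, with a separately scoped UI variant so that the Selenium-backed bindings only apply to scenarios tagged @ui (the Scope attribute works this way in recent SpecFlow versions):

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // Hypothetical core API used by the bindings.
    public class Calculator
    {
        private readonly List<int> _entries = new List<int>();
        public int Result { get; private set; }
        public void Enter(int number) => _entries.Add(number);
        public void Add() => Result = _entries.Sum();
    }

    // Step definitions bound directly against the core API.
    [Binding]
    public class CalculatorSteps
    {
        private readonly Calculator _calculator = new Calculator();

        [Given(@"I have entered (\d+) into the calculator")]
        public void GivenIHaveEntered(int number) => _calculator.Enter(number);

        [When(@"I press add")]
        public void WhenIPressAdd() => _calculator.Add();

        [Then(@"the result should be (\d+) on the screen")]
        public void ThenTheResultShouldBe(int expected) =>
            Assert.AreEqual(expected, _calculator.Result);
    }

    // UI variant: only used for scenarios tagged @ui, keeping the
    // Selenium bindings out of the plain unit-test runs.
    [Binding, Scope(Tag = "ui")]
    public class CalculatorUiSteps
    {
        [When(@"I press add")]
        public void WhenIPressAdd()
        {
            // Drive the browser here with Selenium WebDriver instead.
        }
    }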
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two-stage process:
When you save the .feature file, the SpecFlow VS plugin generates a .feature.cs file.
When you run your test engine (e.g. NUnit), it ignores the plain text and uses the compiled code from the .feature.cs.
So if you start using linked .feature files, I have no idea whether the SpecFlow plugin will generate a .feature.cs for both instances of the file. (If you try this, please let us know.)
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already, in the example you have given, you have "on the screen" in the final step. If you are working with just the core API then there won't be a screen, so do we change this to fit better into a non-UI scenario?
Finally, you have another thing to consider: just how useful will your tests be? If you have already got a test that exercises the core API, then what will it mean to run the same test via Selenium? All you will really test is the UI layer.
In my current employment we have a great number of regression tests that perform this very kind of testing, running up a client that connects to a server and manipulating the UI to get the desired scenarios enacted. These are the most fragile tests we have, due to their scale. They constantly break, and we basically have to check our entire codebase to find the line that broke them; often something like 10-100 of them break for a one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much.
In my own personal projects I tend to remove these tests completely and instead, with UIs, avoid testing the view layer. With WPF MVVM, I execute Commands and test for results in ViewModels, as sketched below. If somebody then decides the TextBox should be a ComboBox or that it will work better in mauve, then my testing is isolated.
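A minimal sketch of that style, assuming a hand-rolled DelegateCommand and a hypothetical AddViewModel, with NUnit as the test framework:

    using System;
    using System.Windows.Input;
    using NUnit.Framework;

    // A bare-bones ICommand wrapper around a delegate.
    public class DelegateCommand : ICommand
    {
        private readonly Action _execute;
        public DelegateCommand(Action execute) { _execute = execute; }
        public bool CanExecute(object parameter) => true;
        public void Execute(object parameter) => _execute();
        public event EventHandler CanExecuteChanged { add { } remove { } }
    }

    // Hypothetical ViewModel: the view binds to these properties and the command.
    public class AddViewModel
    {
        public int Left { get; set; }
        public int Right { get; set; }
        public int Result { get; private set; }
        public ICommand AddCommand { get; }

        public AddViewModel()
        {
            AddCommand = new DelegateCommand(() => Result = Left + Right);
        }
    }

    // The test executes the command directly, never touching the view layer.
    [TestFixture]
    public class AddViewModelTests
    {
        [Test]
        public void AddCommand_sums_the_operands()
        {
            var vm = new AddViewModel { Left = 50, Right = 70 };
            vm.AddCommand.Execute(null);
            Assert.AreEqual(120, vm.Result);
        }
    }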
In short, there is a reason you can't find anything about this on Google :-)
In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI -- for example, testing login and ensuring that the user is redirected to the home page after a successful login. If you are using Angular 2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer, I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end-specific tests (such as testing React router redirects) and SpecFlow for tests at lower architectural layers. Our front-end uses Node.js/Express and our backend is .NET Core. The idea is that the front-end tests mostly exercise the front-end only, with mocked-out AJAX calls to the backend (see sinonjs), and the back-end tests use EF Core with the in-memory option (see docs.efproject.net/en/latest/providers/in-memory/), so the tests all run fast.
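For reference, the in-memory option boils down to something like this (a sketch; OrdersContext and Order are hypothetical names, the provider comes from the Microsoft.EntityFrameworkCore.InMemory package, and recent EF Core versions require the database name argument shown):

    using Microsoft.EntityFrameworkCore;

    public class Order { public int Id { get; set; } }

    public class OrdersContext : DbContext
    {
        public OrdersContext(DbContextOptions<OrdersContext> options) : base(options) { }
        public DbSet<Order> Orders { get; set; }
    }

    public static class InMemoryDemo
    {
        public static void Run()
        {
            // Swap the real database for the in-memory provider so the
            // test bindings can seed and query data quickly.
            var options = new DbContextOptionsBuilder<OrdersContext>()
                .UseInMemoryDatabase("acceptance-tests")
                .Options;

            using (var context = new OrdersContext(options))
            {
                context.Orders.Add(new Order { Id = 1 });
                context.SaveChanges();
            }
        }
    }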
Of course, you still need a few tests that actually go all the way through, but those are different -- we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way, you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that test all the way front-to-back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.

Test Projects Conversion

I apologize. This is part rant, part question.
For the rant: Dear MS developers who post MVC projects on CodePlex and dedicate their sites to MVC with TDD: I love to learn from you, and thanks for the examples, but not everyone has Pro. I'm sick of not being able to load the test project portion of these things because I use Standard, which more than covers most needs [with NUnit]. It's annoying.
Now, for the question. :-)
Is there a tool out there to convert these unit test projects to a project file that Visual Studio Standard can open, so I can at least compile and view the code?
--
P.S. Dear Microsoft: Enough with not including unit testing with all versions of Studio already. It's silly. Testing is not just an "enterprise" or "pro" feature.
In most cases, switching between test frameworks is just a case of search/replace. If the csproj doesn't load, just create a new project file, drag the code files in, and fix them afterwards (along with the references). TestDriven.NET (or the NUnit console), for example, is perfectly happy with just a dll/exe as the test project.
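As a sketch of the search/replace involved, the conversion from MSTest to NUnit is mostly an attribute rename:

    // MSTest (Microsoft.VisualStudio.TestTools.UnitTesting)  ->  NUnit (NUnit.Framework)
    //
    //   [TestClass]       ->  [TestFixture]
    //   [TestMethod]      ->  [Test]
    //   [TestInitialize]  ->  [SetUp]
    //   [TestCleanup]     ->  [TearDown]
    //
    // Assert.AreEqual, Assert.IsTrue, and most other Assert calls
    // keep the same names in both frameworks.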
Maybe a pain, but not necessarily a huge problem.
If you want to use NUnit with MVC, there are brief instructions here and here - I don't know (haven't tested) whether they apply to Standard.

Best Way of Automating Daily Build

OK, so we all know the daily build is the heartbeat of a project, but what's the single best way of automating it?
We have Perl scripts wrapping our pipeline, which includes ClearCase, VS2005 (C++), Intel FORTRAN, and Inno Setup. We use cron jobs on UNIX to schedule the build, and we host a simple Apache web server to view and monitor it. All in all it's rather complex, and I would like to know: what's the best off-the-shelf solution that people use?
And yes, I did say FORTRAN; there's no escaping it sometimes. It works, and there's no point doing a huge re-implementation project for some tried-and-tested FEA code that just works.
A new one to me that I've heard is quite slick is Hudson - it also has MSBuild support.
We're in the process of implementing CC.NET. So far it seems like it would fit your model pretty well.
Out of the box it offers automated building, results tracking, and notification. I'm not sure how detailed the build-in-progress monitoring is, though.
There are many tools that specifically handle this:
CruiseControl
Hudson
Continuum
These tools have out-of-the-box support for the most common build types. They all also support some sort of "run this script" build process.
In the end you should use the nicer build tools (MSBuild, Ant, Maven, Make, ...) where you can and fill the gaps for the odder tools with custom scripts. The automated build can just invoke these in the right order.
Here is the best resource we found to help us pick a Continuous Integration tool. We have been evaluating 5 or 6 tools on this page.
http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
We use TeamCity - but then ours is simple C#/Java development. Maybe your pipeline can be done via scripts that it can drive?
I have had success using Visual Build Pro.
CC.NET is very powerful. I used it and was really happy with it, right down to the status icon in the systray. It's a small detail, but it gives you a good overview of the project's "health"; you immediately feel motivated to fix the tests when you see it turn red.
Now we use a self-baked series of scripts. Since we write Python, compilation is non-existent, so the only problem is running the tests.
If you're working with Visual Studio, be sure to check out Team Foundation Build to see if it will suit your situation.
It looks like Buck Hodges' blog post on the VS 2008 version is a good resource, too.
I know this is a really old question, but it's still coming up in searches, so someone should mention Jenkins - the open source continuation of Hudson.
From the Jenkins wiki:
Among those things, current Jenkins focuses on the following two jobs:
Building/testing software projects continuously, just like CruiseControl or DamageControl. In a nutshell, Jenkins provides an easy-to-use so-called continuous integration system, making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. The automated, continuous build increases the productivity.
Monitoring executions of externally-run jobs, such as cron jobs and procmail jobs, even those that are run on a remote machine. For example, with cron, all you receive is regular e-mails that capture the output, and it is up to you to look at them diligently and notice when it broke. Jenkins keeps those outputs and makes it easy for you to notice when something is wrong.
It was originally built with Java in mind, so it integrates well with lots of other Java tools, but you can use it with any language, including all those mentioned by the OP.
