Should I create a separate Benchmark project? - benchmarkdotnet

I want to measure performance of some methods in my console application using BenchmarkDotNet library.
The question is: should I create a separate project in my solution where I will copy the methods I am interested in measuring and do the measuring there or should I add all the attributes necessary for measuring into the existing project?
What is the convention here?

You can think about it as adding unit tests for your console app. You don't add the tests to the app itself, but typically create a new project that references (not copies) the logic that you want to test.
In my opinion the best approach would be to:
Add a new console app for benchmarks to your solution.
In the benchmarks app, add a project reference to the existing console app.
Add new benchmark classes with all the BDN annotations to the benchmark project, but implement the benchmarks by calling the public types and methods exposed by your console app. Don't copy the code (over time you might introduce changes to one copy and end up testing an outdated version). A minimal sketch is shown below.
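As a rough illustration (the namespace, the Parser class, and the method names here are hypothetical placeholders, not from your app), the benchmark project could look something like this:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using MyConsoleApp; // project reference to the existing console app (name assumed)

namespace MyConsoleApp.Benchmarks
{
    [MemoryDiagnoser]
    public class ParserBenchmarks
    {
        private readonly string _input = new string('x', 10_000);

        // Calls a public method exposed by the referenced project;
        // Parser.Parse is a placeholder for the method you want to measure.
        [Benchmark]
        public int ParseLargeInput() => Parser.Parse(_input);
    }

    public class Program
    {
        public static void Main(string[] args)
            => BenchmarkRunner.Run<ParserBenchmarks>();
    }
}
```

Remember to build and run the benchmark project in Release configuration, otherwise BenchmarkDotNet will complain about a non-optimized build.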

Related

Project Layout with regards to Exrin and Databases

What is the preferred solution for Exrin project layout when adding a database?
The sample Tesla app had a separate project for the Services and another for the Repository. With the removal of both of those projects in the latest template, it makes the most sense for it to go within the Logic project, but I'm curious if the author had a different preferred implementation.
The Tesla Sample project is designed for a very large app; the Service and Repository don't need to be separated out into their own projects, they can all be referenced directly in the Logic project, as per this diagram.
This is the project setup I now recommend for most projects.

Selenium Testing Architecture

I am trying to optimize the current Automation testing we use for our application. We currently use a combination of Selenium and Cucumber.
Right now the layers we use are:
TEST CASE -> SELENIUM -> BROWSER.
I have seen recommendations that it's better to use TEST CASE -> FRAMEWORK -> SELENIUM -> BROWSER, so that when changes happen in the UI you only need to update the framework and not each test case.
The question is this: our scripts are currently broken up into individual steps, so when UI changes happen we only update a script or two. Is it better to keep this approach, with several scripts that execute for each test case, or to go to the framework approach, where the classes, methods, etc. reside in the framework and the test cases just call those methods with parameters for each step?
It depends on:
the life cycle of your testing project: a project with a long life cycle is more worth developing a framework for than a short-lived one.
how often you need to update your test cases (which in turn depends on how often the web pages under test change): a volatile page will demand that its test scripts be updated more regularly. Having a framework improves maintainability (that is, if the framework is well written).
Introducing a framework has the following pros and cons:
Pros: easier maintenance. You no longer need to modify code across multiple test cases, which saves you effort and time, and you get to re-use the framework for future projects, which will save you time and effort in the long run.
Cons: development overhead. Extra money and effort are required to build it, and if the project is small and short, the effort you spend on introducing a framework may even outweigh its benefits.
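If you do go the framework route, the usual shape is the page object pattern: the framework owns the locators and the interactions, and the test cases only state intent. A minimal sketch (the page, locators, and URL are made up for illustration, and your project may drive this from Cucumber step definitions rather than a Main method):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Framework layer: owns locators and interactions for one page.
// When the UI changes, only this class needs to change.
public class LoginPage
{
    private readonly IWebDriver _driver;
    public LoginPage(IWebDriver driver) => _driver = driver;

    public void Open() =>
        _driver.Navigate().GoToUrl("https://example.test/login"); // placeholder URL

    public void LogIn(string user, string password)
    {
        _driver.FindElement(By.Id("username")).SendKeys(user);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("submit")).Click();
    }

    public bool IsLoggedIn() =>
        _driver.FindElements(By.Id("logout")).Count > 0;
}

// Test layer: states intent only, passing parameters for each step.
public class LoginTests
{
    public static void Main()
    {
        using IWebDriver driver = new ChromeDriver();
        var page = new LoginPage(driver);
        page.Open();
        page.LogIn("alice", "secret");
        System.Console.WriteLine(page.IsLoggedIn() ? "PASS" : "FAIL");
    }
}
```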

How to write a test project with dependency to a ASP.Net/PHP project?

Let's say I have three projects in my solution:
1. An ASP.NET project that simply prints an output.
2. A PHP project (using VS.PHP) that simply prints an output (the same output as the ASP.NET project, just in a different environment).
3. A C# console project that uses the above two projects as servers and parses their responses.
Now I want to add another project named "Test" and fill it with unit tests, mainly for testing the integrity of the solution.
I am new to unit tests, but my main problem here is not about them. It is this simple question: how can I run the first two projects (using the VS.Php web server for the PHP project and IIS Express for the ASP.NET project, one at a time) somehow before performing my tests? I can't test the third project without having one of the first two active, and as a result I can't check the integrity of my project, not even parts of it.
So, do you have any suggestion? Am I wrong about something here? Maybe I just don't understand something.
Using Visual Studio 2013 Update 3
Usually for unit testing you don't connect live systems together with your tests. That would be called integration testing instead. The line I usually use with unit testing is that it needs to a) always be fast and b) be able to be run without network connectivity.
If you want to do unit testing, the easiest way is to make interfaces around your dependent systems. Don't use these names, but something like IAspNetProject and IPhpProject. Code to those interfaces and then replace their implementation with fake data for unit testing.
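As a rough sketch of that idea (the interface names follow the suggestion above; the method shapes and expected values are assumptions for illustration), the code under test depends only on the interfaces, and the unit tests swap in fakes:

```csharp
// Abstractions the console project codes against.
// The method shapes here are assumptions for illustration.
public interface IAspNetProject
{
    string GetOutput();
}

public interface IPhpProject
{
    string GetOutput();
}

// The real implementations would call the running sites over HTTP.
// For unit tests, replace them with fakes returning canned data.
public class FakeAspNetProject : IAspNetProject
{
    public string GetOutput() => "expected output";
}

// The piece of the console project that parses server responses.
public class ResponseParser
{
    private readonly IAspNetProject _server;
    public ResponseParser(IAspNetProject server) => _server = server;

    public bool OutputIsValid() => _server.GetOutput() == "expected output";
}

// A unit test can now run fast and offline:
//   var parser = new ResponseParser(new FakeAspNetProject());
//   Assert.IsTrue(parser.OutputIsValid());
```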
If you want to do integration testing, then you can use something like http://nancyfx.org/ to create a self hosted web project. There are tons of other options for starting a lightweight web app locally to do testing against.
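For the integration-testing route, a self-hosted stand-in could look roughly like this (Nancy 1.x style syntax; the route, response, and port are placeholders, so check the Nancy documentation for the exact API of your version):

```csharp
using System;
using Nancy;
using Nancy.Hosting.Self;

// Minimal self-hosted stand-in for one of the web projects.
public class EchoModule : NancyModule
{
    public EchoModule()
    {
        Get["/"] = _ => "expected output"; // placeholder route and response
    }
}

public class Program
{
    public static void Main()
    {
        var uri = new Uri("http://localhost:8089"); // placeholder port
        using (var host = new NancyHost(uri))
        {
            host.Start();
            // Run the console project or integration tests against "uri" here,
            // then stop the host when they finish.
            Console.WriteLine("Listening on " + uri);
            Console.ReadLine();
            host.Stop();
        }
    }
}
```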

Can I use Unit Testing tools for Integration Testing?

I'm preparing to create my first Unit Test, or at least that's how I was thinking of it. After reading up on unit testing this weekend, I suspect I'm actually wanting to do Integration Testing. I have a black-box component from a 3rd party vendor (e.g. a digital scale API) and I want to create tests to test its usage in my application. My goal is to determine if a newly released version of said component is working correctly when integrated into my application.
The use of this component is buried deep in my application's code and the methods that utilize it would be very difficult to unit test without extensive refactoring which I can't do at this time. I plan to, eventually.
Considering this fact, I was planning to write custom Unit Tests (i.e. not derived from one of my classes' methods or properties) to put this 3rd party component through the same operations that my application will require from it. I do suspect that I'm circumventing a significant benefit of Unit Testing by doing it this way, but as I said earlier, I can't stop and refactor this particular part of my application at this time.
I'm left wondering if I can still write Unit Tests (using Visual Studio) to test this component or is that going against best practices? From my reading it seems that the Unit Testing tools in Visual Studio are very much designed to do just that - unit test methods and properties of a component.
I'm going in circles in my head, I can't determine if what I want is a Unit Test (of the 3rd party component) or an Integration Test? I'm drawn to Unit Tests because it's a managed system to execute tests, but I don't know if they are appropriate for what I'm trying to do.
Your plan of putting tests around the 3rd party component, to prove that it does what you think it does (what the rest of your system needs it to do) is a good idea. This way when you upgrade the component you can tell quickly if it has changed in ways that mean your system will need to change. This would be an Integration Contract Test between that component and the rest of your system.
Going forward it would behoove you to put that 3rd party component behind an interface upon which the other components of your system depend. Then those other parts can be tested in isolation from the 3rd party component.
I'd refer to Michael Feathers' Working Effectively with Legacy Code for information on ways to go about adding unit tests to code which is not factored well for unit tests.
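As a sketch of both ideas (the interface, the vendor type, and the expected behaviour are invented for illustration, not the vendor's actual API):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Interface the rest of the system will depend on going forward.
public interface IScale
{
    decimal ReadWeightInGrams();
}

// Adapter over the vendor component. "VendorScale" and "GetWeight"
// are placeholders for the real 3rd party API.
public class VendorScaleAdapter : IScale
{
    public decimal ReadWeightInGrams()
    {
        var scale = new VendorScale();   // hypothetical vendor type
        return scale.GetWeight();        // hypothetical vendor call
    }
}

// Contract tests: exercise the component the way the application uses it,
// and re-run them whenever the vendor releases a new version.
[TestClass]
public class VendorScaleContractTests
{
    [TestMethod]
    public void ReportsANonNegativeWeight()
    {
        IScale scale = new VendorScaleAdapter();
        Assert.IsTrue(scale.ReadWeightInGrams() >= 0);
    }
}
```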
Testing the 3rd party component the way you are doing it is certainly not against best practices.
Such a test would, however, be classified as a (sub-)system test, since a) the 3rd party component is tested as an isolated (sub-)system, and b) your testing goal is to validate the behaviour at the API level rather than to test lower-level implementation aspects.
The test would definitely not be classified as an integration test, because you are simply not testing the component together with your code. That is, you will for example not find out if your component uses the 3rd party component in a way that violates the expectations of the 3rd party component.
That said, I would like to make two points:
The fact that a test is not a unit-test does not make it less valuable. I have encountered situations where I told people that their tests were not unit-tests, and they got angry at me because they thought I wanted to tell them that their tests did not make sense - an unfortunate misunderstanding.
To what category a test belongs is not defined by technicalities like which testing framework you are using. It is rather defined by the goals you want to achieve with the test, for example, which types of errors you want to find.

MSBuild, TeamCity, SVN: Best practice for reliable targeting of test/dev/live builds/configs

I wonder if I could get some feedback from people on how to best approach building of Visual Studio solutions.
My core requirements would be to ensure that any code/tests run against the correct resources, in particular, database schema and sample data.
I've tried various ways to do this with mixed degrees of success. Currently, I:
Create a class library *.Installation.dll, which creates, configures and populates the database, etc.
Have a class library *.Build.dll, which has an MSBuild task that takes parameters from the csproj file and passes them to the Installation.dll.
These sit within their own solution. Say MyApp.Build.sln. I maintain this separately from my main solution, to prevent file locking issues.
In my main solution, say MyApp.sln …
Then, my test projects invoke the MSBuild task to create test environments for integration testing, including the database and sample test data.
And my Web/Windows front-end projects invoke the MSBuild task to create runnable environments for test users / my manual testing.
So, I am using MSBuild to create customisable builds/environments for testing/running. Additionally, I can wrap the Installation.dll into a configuration/setup tool to automate the installation for the user when the time comes to install.
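For illustration, the custom task is roughly along these lines (the class, the property names, and the installer call are simplified placeholders, not the real code):

```csharp
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Simplified placeholder for the task in *.Build.dll. It receives parameters
// from the csproj and forwards them to the installation logic.
public class BuildTestEnvironment : Task
{
    [Required]
    public string ConnectionString { get; set; }

    public bool IncludeSampleData { get; set; }

    public override bool Execute()
    {
        Log.LogMessage(MessageImportance.Normal,
            "Creating test database for {0}", ConnectionString);

        // The installer lives in *.Installation.dll (placeholder call).
        var installer = new DatabaseInstaller(ConnectionString);
        installer.CreateSchema();
        if (IncludeSampleData)
            installer.LoadSampleData();

        return !Log.HasLoggedErrors;
    }
}
```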
Is this too complex a scenario? I'm worried I've over-engineered this and overlooked something. It works well, but is bound up with a lot of meta-programming (e.g. the database build code, configuration, build task, etc.) that is not directly involved with tangible, chargeable work.
I have Subversion and TeamCity. Ultimately, I'd like to enable a CI build that is invoked on a daily/commit build trigger. Or can I use TeamCity in such a way as to avoid rebuilding the database etc. on every build?
Thanks for any insight.
