Performance Testing using the xUnit framework

I wonder if it is possible to run performance testing based on xUnit?

Not sure what exactly you're asking, but you can easily write your own custom attribute to do that. For example:
// Requires System.Diagnostics, System.Reflection and Xunit.Sdk
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class TraceAttribute : BeforeAfterTestAttribute
{
    private readonly Stopwatch stopwatch = new Stopwatch();

    public override void Before(MethodInfo methodUnderTest)
    {
        // Start the timer before the test method runs
        stopwatch.Restart();
    }

    public override void After(MethodInfo methodUnderTest)
    {
        // Stop the timer and report the elapsed time
        stopwatch.Stop();
        Trace.WriteLine($"{methodUnderTest.Name} took {stopwatch.ElapsedMilliseconds} ms");
    }
}
Then decorate your unit test with this attribute. Also make sure you write the result to an output ;)
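For instance, usage could look like this minimal sketch (the test class and assertion are just placeholders):
public class CalculatorTests
{
    [Fact]
    [Trace] // the custom timing attribute defined above
    public void Add_ReturnsSum()
    {
        Assert.Equal(4, 2 + 2);
    }
}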

For JUnit, you might look at JUnitPerf.
If you're working in a different language and a different xUnit, the JUnitPerf design concept (decorating normal JUnit tests) and code might give you ideas of how to do the same in your language.

Now there is a project on GitHub: https://github.com/Microsoft/xunit-performance
It provides extensions over xUnit to author performance tests (benchmarks).

Related

Changing order of Coded UI test in Visual Studio 2013

I've been tasked with making our coded UI testing come out with passing tests (the tests are already created; I just needed to adjust program code), but I'm having an interesting problem with some consistently failing tests. When I run these "failed" tests individually, they pass without any problems. I want to change the order of the tests to see if this can remedy the situation. The [TestInitialize] is set to start the program from the beginning, but it isn't doing so when all the tests run together on another machine with Windows Server 2012 using vstest.console.exe. I think if I can tinker with the order of the tests, I can at least bypass some of the failures that are happening. I see a *.orderedtest, but it doesn't seem to contain all the tests that are being run, so I'm not sure if that's the area to mess with. Any advice is greatly appreciated.
Thanks!
You can have something like this to make it ordered:
[TestMethod]
public void MyIntegrationTestLikeUnitTest()
{
    ScenarioA();
    ScenarioB();
    // ...
}

private void ScenarioA()
{
    // Do your stuff
}

private void ScenarioB()
{
    // Do your stuff
}
Or assign a priority to each test, like below:
[TestMethod]
[Priority(1)]
public void FirstTest()
{
    // Code.
}

[TestMethod]
[Priority(2)]
public void SecondTest()
{
    // Code.
}

[TestMethod]
[Priority(3)]
public void ThirdTest()
{
    // Code.
}

Multiple JUnit test in one browser session

I've written a program in Selenium WebDriver, but for my next project I would like to make it more maintainable by using better programming techniques. The main part I want to focus on is launching the browser once (one session), running say 10 different tests, and then closing the browser, but I'm not sure how to do this. Using JUnit, this is how I currently have my project laid out:
package example1;

public class TestBase { // main class
    @Before
    public void setup() {
        // launch browser
    }

    @Test // all tests run here
    public void test1() {
        login();
        homepage();
    }

    @After
    public void teardown() {
        // close browser
    }
}

package example1;

public class login {
    // do some action
}

package example1;

public class homepage {
    // do some action
}

package example1;

public class storeMethods {
    // all methods are stored which are then called by different classes
}
I'm not sure if the @Test annotation should even be in the main class or if it should be in its own class (login(), homepage()), because I read somewhere that tests should not depend on each other. I don't have much experience in Java, but I'm more than willing to learn. I just need some guidance on best practices and how to write good, maintainable tests, so if someone could help me out or point me in the right direction, I'd really appreciate it.
While what Robbie Wareham said is correct (reusing the browser is not a good idea), you said that your overall goal is maintainability.
The technique I've found to increase maintainability is the Page Object pattern, with separate functions to interact with each page.
The Page Object pattern separates the selectors from the rest of the code. That way, if an element on a page changes and your tests use that element 5 times, you only change your code in one spot. It is also standard to include an isLoaded() function that can be used to check whether you are already on the page you need, so you don't reload the page.
I would also recommend that your tests not deal directly with the page objects you created. If you had a toolbar that you used to go to page X, and then the toolbar changed so that the link you wanted was in a sub-menu, every test that clicked that link directly would have to change; with a wrapper method you only change that one method. Creating sets of Selenium commands that interact with the page will make your tests high-level and easy to read.
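As an illustration, a minimal page object might look like the sketch below (shown in C# with Selenium's .NET bindings, since most of this page uses C#; the class name and element IDs are made up, and the same shape applies in Java):
using OpenQA.Selenium;

// Hypothetical page object: all selectors for the login page live here and nowhere else.
public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    // Standard isLoaded-style check so callers can avoid reloading the page.
    public bool IsLoaded()
    {
        return driver.FindElements(By.Id("login-form")).Count > 0;
    }

    public void LoginAs(string user, string password)
    {
        driver.FindElement(By.Id("username")).SendKeys(user);
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("submit")).Click();
    }
}
Tests then call loginPage.LoginAs(...) and never touch the selectors directly, so a markup change means editing one class instead of every test.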
I would suggest that reusing the browser does not follow good automation programming practice.
Reusing the browser will result in unstable and unreliable tests, with inter-test dependencies.
In my opinion, it is far better to have atomic, self-contained tests.
If test runtime is an issue, then look at parallelism and using Selenium Grid.

How to order methods of execution using Visual Studio to do integration testing?

I have two questions regarding doing integration testing using VS 2010.
First, I really need to find a way to execute these test methods in the order I want. Note: I know that in unit testing, methods should run standalone from anything else, but these are integration tests, and I do depend on the order in which the methods run.
On the same note, is there a way to keep a local variable alive across the tests? For example, something like the following code, which currently fails:
[TestClass]
public class UnitTest1
{
    int i = 0;

    [TestMethod]
    public void TestMethod1()
    {
        i = 5;
    }

    [TestMethod]
    public void TestMethod2()
    {
        Assert.AreEqual(5, i);
    }
}
So is there a way to do any of these?
To execute tests in a specific order I followed these steps:
In a test project with test1, test2 and test3:
1. Right-click on the project, 'Add' -> 'New Test...'
2. Select 'Ordered Test'
3. Double-click the file that appears, "OrderedTest1.orderedtest"
4. Build the project if it was not built previously
5. From the list of available tests, select the tests you want and order them
From that point on, a new test appears in the test list editor.
It is an extra test that runs the enclosed tests in the correct order, but if you carelessly run all the tests in the project, the tests included in the ordered list will be executed twice, so you need to manage test lists or test categories to avoid that.
I tried disabling the individual tests, but that also disables the ordered test; I don't know a better way to handle this.
It is best practice to use functions to set up the tests and to clean them up, using the attributes [TestInitialize] and [TestCleanup], or [ClassInitialize] and [ClassCleanup].
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting(v=VS.100).aspx
The following code is an example of something similar to what you want:
[TestClass]
public class UnitTest1
{
    int i = 0;

    [TestInitialize]
    public void Setup()
    {
        i = 5;
    }

    [TestMethod]
    public void TestMethod1()
    {
        Assert.AreEqual(5, i);
    }
}
The Setup function will be called before each test is executed.
If you need to pass a value from one test to another, you might consider using a static variable, which is not recommended due to the nondeterministic order of execution.
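For what it's worth, a sketch of that static-variable approach could look like this (the method names are placeholders, and it only behaves as intended if something such as an ordered test or priorities guarantees the execution order):
[TestClass]
public class OrderDependentTests
{
    // Static, because MSTest creates a new instance of the test class for each test method,
    // so an instance field would not survive from one test to the next.
    private static int sharedValue;

    [TestMethod]
    public void TestMethod1_SetsValue()
    {
        sharedValue = 5;
    }

    [TestMethod]
    public void TestMethod2_ReadsValue()
    {
        Assert.AreEqual(5, sharedValue);
    }
}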
Usually there is a way to avoid needing a specific order by using the setup/cleanup technique, but it is true that this might not hold for very complex integration tests.
If there is no way to avoid ordering them, you can consider merging them into a single test, again breaking the best practice of having only one assert per test; but if they are that dependent on one another, it might even be better this way, as one test failing could compromise the results of the others.
EDIT:
Maybe using ordered tests answers question 1, and using static variables answers question 2:
http://msdn.microsoft.com/en-us/library/ms182631.aspx

How do I skip specific tests in xUnit based on current platform

I have an assembly that I've built on Windows, and I want to run the xUnit tests on Mono in Linux.
However, I have found that while 400 of these tests can run (in order), certain tests either hang the xUnit runner or bring it down entirely.
I don't care if certain tests are not able to run on Linux; some of them deal with the DTC and some unmanaged gumph that we don't need to support there.
What I do want, however, is to apply an ignore to those tests and have the fact that a test was ignored flagged properly in the build output.
The question can, I guess, be boiled down to a number of possible solutions:
How do I run specific tests in xUnit via the console runner? (I haven't found documentation to this end; maybe I'm just not looking hard enough.)
Is it possible to go the other way and say "Here is an assembly, please ignore these specific tests"?
Having an attribute on those tests has been suggested as a better way to formally document that these tests are platform-specific - is this possible?
If I could avoid modifying the original code too much, that would be grand, as the code isn't really mine to change, and applying lots of cross-platform hacks probably won't go down too well.
xUnit v2.0 is now available and supports skippable tests directly. Use:
[Fact(Skip = "specific reason")]
I would avoid externalising the skipping of tests (i.e. a config/command file), if possible. That somewhat goes against making the tests easy to run and trustworthy. Making the tests ignored in code is the safest approach when other people start to get involved.
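For example, applied to a test method (the method name is just a placeholder):
[Fact(Skip = "Relies on the DTC, which we don't support on Mono/Linux")]
public void DistributedTransactionTest()
{
    // Reported as skipped by the runner instead of being executed.
}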
I could see a number of options; here are two that involve modifying the existing code.
Option 1 - most intrusive, compile-time platform detection
In the VS solution, define another configuration that defines a preprocessor symbol MONOWIN (just so that it's explicitly a flag that says the code is compiled on Windows for use on Mono).
Then define an attribute that will make the test ignored when compiled for Mono:
public class IgnoreOnMonoFactAttribute : FactAttribute {
#if MONOWIN
    public IgnoreOnMonoFactAttribute() {
        Skip = "Ignored on Mono";
    }
#endif
}
It's actually hard to find any advantages to this method, as it involves messing with the original solution and adds another configuration that needs to be supported.
Option 2 - somewhat intrusive, runtime platform detection
Here is a similar solution to Option 1, except that no separate configuration is required:
public class IgnoreOnMonoFactAttribute : FactAttribute {
    public IgnoreOnMonoFactAttribute() {
        if (IsRunningOnMono()) {
            Skip = "Ignored on Mono";
        }
    }

    /// <summary>
    /// Determine if the runtime is Mono.
    /// Taken from http://stackoverflow.com/questions/721161
    /// </summary>
    /// <returns>True if being executed in Mono, false otherwise.</returns>
    public static bool IsRunningOnMono() {
        return Type.GetType("Mono.Runtime") != null;
    }
}
Note 1
The xUnit runner will run a method twice if it is marked with both [Fact] and [IgnoreOnMonoFact]. (CodeRush doesn't do that; in this case I assume xUnit is correct.) This means that any such test method must have [Fact] replaced with [IgnoreOnMonoFact].
Note 2
The CodeRush test runner still ran the [IgnoreOnMonoFact] test, but it did ignore the [Fact(Skip="reason")] test. I assume this is because CodeRush reflects over xUnit rather than actually running it with the aid of the xUnit libraries. This works fine with the xUnit runner.
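Usage is then just a matter of swapping the attribute on the platform-specific tests (the test name below is illustrative):
[IgnoreOnMonoFact]
public void DtcDependentTest()
{
    // Runs on Windows; reported as skipped when executing under Mono.
}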
There is a new option now.
Add the NuGet package SkippableFact, which allows you to use [SkippableFact] instead of [Fact]; you can then use Skip.<xyz> within a test to dynamically skip it at runtime.
Example:
[SkippableFact]
public void SomeTestForWindowsOnly()
{
    Skip.IfNot(Environment.IsWindows);
    // Test Windows only functionality.
}
[Fact(Skip="reason")] works, but I prefer to use traits:
[Fact, Trait("type", "unit")]
public void MyUnitTest()
{
    // given
    // when
    // then
}

[Fact, Trait("type", "http")]
public void MyHttpIntegrationTest()
{
    // given
    // when do things over HTTP
    // then
}
Usage:
dotnet test --filter type=unit
This protects our builds from accidentally running integration tests that devs forgot to skip, e.g. [Fact(Skip="Integration")]; however, it does require unit tests to "opt in" to CI by adding the correct trait, which admittedly isn't great.
Dominik's solution worked for me with this code:
[SkippableFact]
public void Get_WhenCall_ReturnsString()
{
    // Arrange
    Skip.IfNot(RuntimeInformation.IsOSPlatform(OSPlatform.Windows));

    // Act

    // Assert
}
To add to the previous answers regarding SkippableFact: note that each test is still constructed - the constructor is run.
If you have time-consuming code in a base class constructor, an alternative is to gather environment-specific test cases in suitable classes and run the environment check in the constructor:
if (!SupportsTemporalQueries())
    throw new SkipException("This test class only runs in environments that support temporal queries");
This can speed up the test run considerably. In our system we either extend a "generic" base test class (which runs in all environments) or an environment-specific base test class. I find this easier to maintain than filtering in pipelines or other solutions.
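A sketch of what such a base class might look like (the class names and the SupportsTemporalQueries() check are made-up placeholders; SkipException comes from the SkippableFact package):
// Hypothetical base class for tests that require temporal-query support.
public abstract class TemporalQueryTestBase
{
    protected TemporalQueryTestBase()
    {
        // Skip every test in derived classes when the environment lacks support,
        // before any expensive per-test setup runs.
        if (!SupportsTemporalQueries())
            throw new SkipException("This test class only runs in environments that support temporal queries");
    }

    private static bool SupportsTemporalQueries()
    {
        // Placeholder check: replace with a real capability probe for your environment.
        return Environment.GetEnvironmentVariable("SUPPORTS_TEMPORAL_QUERIES") == "1";
    }
}

public class TemporalReportTests : TemporalQueryTestBase
{
    [SkippableFact]
    public void Report_UsesTemporalQuery()
    {
        // Only runs where the base class constructor did not skip.
    }
}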
This is now solved in 1.8 - you can filter on Traits. See this issue log.
Update: Traits work with the console runner but not MSBuild, I've added a feature request for this support.

UI interface and TDD babysteps

OK, having tried my first TDD attempt, it's time to reflect a little and get some guidance, because it wasn't that successful for me.
The solution was partly built on an existing framework, which perhaps makes TDD less ideal. The part that seemed to give me the biggest problem was the interaction between the view and the controller. I'll give a few simple examples and hope that someone will tell me what I could have done better.
Each view's interface inherits from a base interface with these members (there are more):
public interface IView
{
    void ShowField(string fieldId);
    void HideField(string fieldId);
    void SetFieldVisibility(string fieldId, bool visible);
    void DisableField(string fieldId);
    void ShowValidationError(string fieldId);
    // ...
}
The interface for a concrete view would then add members for each field, like this:
public interface IMyView : IView
{
    string Name { get; set; }
    string NameFieldID { get; }
    // ...
}
What do you think of this? Is inheriting from a common interface a good or a bad idea?
One of the things that gave me trouble was that I first used ShowField and HideField and then found out I would rather use SetFieldVisibility. I didn't change the outcome of the method, but I had to update my test, which it seems shouldn't be necessary. Is having multiple methods doing the same thing a bad thing? On one hand both methods are handy for different cases, but they do clutter the interface, making it more complex than it strictly has to be.
Would a design without a common interface be better? That would remove the fieldID; I don't know why, but I think the fieldID thing smells. I might be wrong.
I would only create the Show and Hide methods when needed, that is, if they would be called by the controller. This would be a less generic solution and require more code in the view, but the controller code would be a bit simpler.
So a view interface might look like this:
public interface IMyView
{
    void ShowName();
    void HideName();
    string Name { get; set; }
    int Age { get; set; }
}
What do you want to test? Whether Show* will make a widget in the UI visible? What for?
My suggestion: Don't try to figure out if a framework is working correctly. It's a waste of time. The people who developed the framework should have done that, so you're duplicating their work.
Usually, you want to know if your code does the right thing. So if you want to know if you are calling the correct methods, create mockups:
public class SomeFrameworkMockup extends SomeFramework {
    public boolean wasCalled;

    public void methodToTest() {
        wasCalled = true;
    }
}
Build the UI using the mockups.
The second thing to test is whether your algorithms work. To do that, isolate them in simple helper objects where you can call every method easily, and test them with various inputs.
Avoid the external framework during tests; it only confuses you. When you've built a working product, test it using your mouse. If you find any problems, get to the root of them, and only then start writing tests against the framework to make sure the bug doesn't appear again. But 90% of the time, these bugs will be in your code, too.
At the moment I don't really see the added value of the common interface.
I think a better solution would be to have properties on the controller class such as IsControlXYZVisible. You can then databind the Visible property of the control to this property.
And your unit test will check the value of IsControlXYZVisible, which will be easier to accomplish.
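A minimal sketch of that idea, with xUnit-style assertions (the controller and property names are made up for illustration):
// The controller exposes view state as plain properties instead of calling Show/Hide on the view.
public class MyController
{
    public bool IsNameFieldVisible { get; private set; }

    public void LoadPerson(bool hasName)
    {
        // Presentation logic lives here, where it is trivially testable.
        IsNameFieldVisible = hasName;
    }
}

public class MyControllerTests
{
    [Fact]
    public void NameField_IsHidden_WhenPersonHasNoName()
    {
        var controller = new MyController();
        controller.LoadPerson(hasName: false);
        Assert.False(controller.IsNameFieldVisible);
    }
}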
I also don't understand why you say you had a bad experience with TDD. I think your application architecture needs more work.
Your question is a little bit obscure to me, but the title itself calls for a link:
The Humble Dialog Box
And when you ask whether it's bad to have two functions doing the same thing, I say "Yes, it's bad."
If one is calling the other, what's the point of having two functions? If not, you have code duplication, which is a bug waiting to sprout when you update one and not the other.
In fact, there is a valid case where you have two nearly identical functions: one that checks its arguments and one that does not, but usually only one is public and the other private...
