MSTest and custom messages

Recently I started using MSTest for testing.
Is there any way to write messages to the test window when a test succeeds? I don't see one; messages seem to be allowed only when a test fails. What if I want to, say, print a short description of a test, so I can see what the test means without having to open it? Or, as is currently the case, I'm measuring execution times for some tests and want to print those times out.
Is there a way to extend test methods so I can easily choose whether to run tests with or without time measuring, i.e. choose the mode of test execution?
Thanks

Right click on the columns in the test result window and choose "Add/Remove Columns". Add the columns for "Duration" and "Output (StdOut)". That will give you test timing and let you see what the tests print.
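For example, a test can write its own timing into that output column with TestContext.WriteLine; the message then shows up even when the test passes. A minimal sketch (DoWork is a hypothetical stand-in for whatever you are measuring):

```csharp
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TimedTests
{
    // MSTest injects the TestContext automatically.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void MeasuredTest()
    {
        var watch = Stopwatch.StartNew();
        DoWork(); // hypothetical method under test
        watch.Stop();
        // Appears in the "Output (StdOut)" column even for a passing test.
        TestContext.WriteLine("DoWork took {0} ms", watch.ElapsedMilliseconds);
    }

    private void DoWork() { /* ... */ }
}
```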

Why not give your tests descriptive names?
[TestMethod]
public void AddsTwoNumbersTogether() {...}
[TestMethod]
public void DividesFirstNumberBySecondNumber() {...}
etc.

Related

Load testing authenticated users

I want to do load testing with Visual Studio, but I don't understand how to set up a load test with authenticated users.
Imagine my scenario; it should be a fairly common problem:
a website where you need to authenticate with username and password, and
an action that is only allowed for an authenticated user.
What I have done so far:
I have already written UI tests with Selenium (this is working quite nicely).
UPDATE: My Selenium test class. I want to use this code with the load test. It is a data-driven unit test project, as you can see in the method TestCase4529:
[TestClass]
public class Scenario2
{
    private IWebDriver driver;

    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void SetupTest()
    {
        this.driver = new ChromeDriver();
        this.driver.Manage().Timeouts().ImplicitWait = new TimeSpan(0, 0, 30);
    }

    [TestCleanup]
    public void TeardownTest()
    {
        this.driver.Quit();
    }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\4529.csv", "4529#csv", DataAccessMethod.Sequential)]
    [DeploymentItem("4529.csv")]
    [TestMethod]
    public void TestCase4529()
    {
        var userName = TestContext.DataRow["UserName"].ToString();
        var password = TestContext.DataRow["Password"].ToString();

        // UI test logic
        var loginPage = new LoginPage(this.driver);
        loginPage.FillForm(userName, password);
        loginPage.LoginButton.Click();

        // Some assertions
    }
}
Now, when I set up the load test in Visual Studio, I am asked how many users should do something, and I don't understand what this number means:
Does it only mean the number of simultaneous threads?
How can I get a connection between a user (defined in the load test) and an authenticated user in my Selenium test?
What I would like to achieve: each user defined in the load test should be an authenticated user in my Selenium UI test.
Can somebody give me an idea how to do that, or tell me where my thinking goes wrong?
A Visual Studio load test provides a way of running other tests repeatedly and simultaneously. It works best with Web Performance Tests but it can run unit tests and Coded UI tests.
When a "constant load" of (say) 25 users is selected then 25 test cases chosen from the "test mix" of the load test will be started. Whenever one of those test cases finishes another test will be chosen and started so that there are always 25 test cases being executed. That will continue until the end of the test run, which is normally either a test duration or a number of iterations. (Here "iterations" means number of test cases executed.)
Assuming "Web Performance Tests" are being used then those tests are responsible for providing the user authentication. A common way of doing that is to data drive the test and provide user names and corresponding passwords in that data. See here for more detail.
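A minimal sketch of such a data source: a CSV file bound to the web test, where each simulated user picks the next row, so user 1 logs in with the first credential pair, user 2 with the second, and so on. The file name and the credentials below are placeholders, not values from the question:

```
UserName,Password
testuser01,Secret01!
testuser02,Secret02!
testuser03,Secret03!
```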
Your question asks whether the "constant load" of 25 users means 25 threads. It means that 25 test cases will be running at the same time, but it does not use Windows threads.
In response to comments:
I think you are misusing or misunderstanding the terminology of Microsoft's test environments. You may be able to have a Selenium test within the test mix of a load test although I have never done it. The user count and the data source are independent items. The user count is about how many simulated users are running tests at the same time. The data source is used by the test cases. If you have 25 users and one data driven test then that test should be started 25 times and those 25 executions should use the first 25 lines of the data source (assuming a Sequential or Unique access method).
To provide the user name and password, you have to check for query string parameters in your recorded web test and pass the data in through a data source; see the following images for more detail:
Then pass the recorded web test into the load test as follows:

Testing behavior not consistent when watching actor for termination

When I write tests that subscribe to events on the EventStream, or that watch actors and listen for "Terminated", the tests work fine when run one by one, but when I run the whole test suite those tests fail.
The tests also work if each of them is in a separate test class with xUnit.
How come?
A repo with those kinds of tests: https://github.com/Lejdholt/AkkaTestError
I took a look at your repository and can reproduce the problems you are describing.
It feels like a bug in the TestKit, some timing issue somewhere, but it's hard to pin down.
Also, not all unit test frameworks are created equal. The TestKit uses its own TaskDispatcher to enable the testing of operations that are normally inherently asynchronous.
This sometimes conflicts with the test framework being used, which is also, coincidentally, why Akka.NET moved to xUnit for its own CI process.
I managed to fix your problem by not using the TestProbe, although I'm not sure whether the problem lies with the TestProbe per se or with the fact that you were using a global reference (your 'process' variable).
I suspect that the test framework, while running tests in parallel, might be causing some weird things to happen with your TestProbe reference.
Here is an example of how I changed one of your tests:
[Test]
public void GivenAnyTime_WhenProcessTerminates_ShouldLogStartRemovingProcess()
{
    IProcessFactory factory = Substitute.For<IProcessFactory>();
    var testactor = Sys.ActorOf<FakeActor>("test2");
    processId = Guid.NewGuid();
    factory.Create(Arg.Any<IActorRefFactory>(), Arg.Any<SupervisorStrategy>())
           .Returns(testactor);

    manager = Sys.ActorOf(Props.Create(() => new Manager(factory)));
    manager.Tell(new StartProcessCommand(processId));

    EventFilter.Info("Removing process.")
               .ExpectOne(() => Sys.Stop(testactor));
}
It should be fairly self-explanatory how to change your other tests.
The FakeActor is nothing more than an empty ReceiveActor implementation.
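For completeness, such a FakeActor can be as small as this (an empty ReceiveActor, exactly as the description says; it exists only so the test has a real actor reference to hand out and to stop):

```csharp
using Akka.Actor;

// Receives nothing and does nothing; a stand-in for the real process actor.
public class FakeActor : ReceiveActor
{
}
```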

XCTAssertTrue doesn't stop routine

Xcode doesn't terminate a test routine at failed assertions. Is this correct?
I fail to understand the reason behind this and I'd like it to behave like assert and have it terminate the program.
With the following test, it will print "still running".
Is this intended?
- (void)testTest
{
    XCTAssertTrue(false, @"boo");
    NSLog(@"still running");
}
I don't see how this would be useful, because often subsequent code would crash when pre-conditions aren't met:
- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}
You can change this behavior of XCTAssert<XX>: in the setUp method, set self.continueAfterFailure to NO.
IMO, stopping the test after an assertion failure is the better behavior (it prevents crashes that stop other important tests from running). If a test needs to continue after a failure, that test case is simply too long and should be split.
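Concretely, overriding setUp like this makes every assertion in the class behave as a hard stop for its test method (Objective-C, matching the question):

```objc
- (void)setUp
{
    [super setUp];
    // Abort the current test method at the first failed assertion.
    self.continueAfterFailure = NO;
}
```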
Yes, it is intended. That's how unit tests work. Failing a test doesn't terminate the testing; it simply fails the test (and reports it as such). That's valuable because you don't want to lose the knowledge of whether your other tests pass or fail merely because one test fails.
If (as you say in your addition) the test method then proceeds to throw an exception, well then it throws an exception - and you know why. But the exception is caught, so what's the problem? The other test methods still run, so your results are still just what you would expect: one method fails its test and then stops, the other tests do whatever they do. You will then see something like this in the log:
Executed 7 tests, with 2 failures (1 unexpected) in 0.045 (0.045) seconds
The first failure is the XCTAssert. The second is the exception.
Just to clarify: you are correct that if a test generates a "failure", that individual test will still continue to execute. If you want to stop that particular test, you can simply return from it.
The fact that the test continues can be very useful: it lets you identify not only the first issue that caused the test to fail, but all of them.
You say:
I don't see how this would be useful, because often subsequent code would crash when pre-conditions aren't met:
- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}
Your example is ironic because, yes, I can understand why you'd want to return if the two vectors were different sizes, but if they happened to be the same size, the second half of your example is a perfect illustration of why you might not want the test to stop after the first failure.
Let's assume the vectors were the same size and five of the items differed. It might be nice to have the test report all five of those failures, not just the first one.
When reviewing test results, it's sometimes nice to know all of the sources of failure, not just the first one.

Store value of some property during test execution in Visual Studio

I have started to work with Unit Testing in Visual Studio 2010, .NET 4.0.
Some of our tests fail if a value is below a certain threshold, and then we plan to design the proper exception.
For example, let's suppose some method is expected to return a value greater than 10. I run the tests, and when I open the automatically saved .trx file, it says the test failed.
What I want to know is:
Is there a way to save property values during a test, so that I can know their values after the test list execution has finished?
Here are some resources that explain the different ways to log values. From the link:
[TestMethod]
public void CodedUITestMethod1()
{
    Console.WriteLine("Console.WriteLine()");
    Console.Error.WriteLine("Console.Error.WriteLine()");
    TestContext.WriteLine("TestContext.WriteLine()");
    Trace.WriteLine("Trace.WriteLine()");
    Debug.WriteLine("Debug.WriteLine()");
}
Keep in mind that these messages are not captured as the test runs but are flushed at the end, so if there is an unhandled exception the values may be lost. For that case we send key values to a database or a file as a backup.
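A minimal sketch of such a file backup, assuming a plain text log is acceptable (the helper name and path below are placeholders): each call appends and closes the file immediately, so the value is on disk even if the run dies later.

```csharp
using System;
using System.IO;

public static class TestLog
{
    // Hypothetical helper: writes one timestamped name=value line per call.
    // File.AppendAllText opens, writes, and closes on every invocation,
    // so nothing is buffered if the test process crashes afterwards.
    public static void Record(string name, object value)
    {
        File.AppendAllText(@"C:\TestLogs\values.log",
            string.Format("{0:o}\t{1}={2}{3}",
                DateTime.Now, name, value, Environment.NewLine));
    }
}
```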

MS Visual Studio Unit Tests take forever to close - even if there's no code in the test

Why would this be happening? I have no code in my test:
[TestClass]
public class ApiClientTest
{
    [TestMethod]
    public void ApiTestSampleUseCase()
    {
        TestCleanup();
    }

    public void TestCleanup()
    {
        //DeleteUsers();
    }
}
Yet when I run this, it takes ~1 minute to complete. It passes, it just takes forever. I set a breakpoint after DeleteUsers and it gets hit immediately; the delay is after the test has completed.
Check your bin directory - for some incomprehensible reason, MSTest copies all referenced assemblies to a new directory every time you run the tests. (These are safe to delete, though.) Maybe it does something else as well that causes it to slow down after many runs... According to this answer and this documentation, you can disable this behaviour.
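In a .testsettings file, that copying can be switched off with a deployment element roughly like this (sketch; the rest of the file is omitted):

```xml
<TestSettings name="Local"
              xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- Run tests in place instead of copying assemblies to a fresh Out directory. -->
  <Deployment enabled="false" />
</TestSettings>
```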
I'd recommend switching to NUnit if possible, preferably in combination with FluentAssertions (due to the not-insignificant number of weird design decisions in MSTest, such as this behaviour, the lack of easy-to-use parameterized tests, and the fact that a test will run even if [TestInitialize] throws an exception).
