Load testing authenticated users - visual-studio

I want to do load testing with Visual Studio, but I don't understand how to set up a load test with authenticated users.
Imagine my scenario; it should be a quite common problem:
a website where you need to authenticate with username and password;
perform an action that is only allowed for an authenticated user.
What I have done so far:
I have already written UI tests with Selenium
(this is working quite nicely).
UPDATE: My Selenium test class: I want to use this code with the load test. This is a data-driven unit test project, as you can see from the method TestCase4529.
[TestClass]
public class Scenario2
{
    private IWebDriver driver;

    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void SetupTest()
    {
        this.driver = new ChromeDriver();
        this.driver.Manage().Timeouts().ImplicitWait = new TimeSpan(0, 0, 30);
    }

    [TestCleanup]
    public void TeardownTest()
    {
        this.driver.Quit();
    }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\4529.csv", "4529#csv", DataAccessMethod.Sequential)]
    [DeploymentItem("4529.csv")]
    [TestMethod]
    public void TestCase4529()
    {
        var userName = TestContext.DataRow["UserName"].ToString();
        var password = TestContext.DataRow["Password"].ToString();

        // UI test logic
        var loginPage = new LoginPage(this.driver);
        loginPage.FillForm(userName, password);
        loginPage.LoginButton.Click();

        // Some assertions
    }
}
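For reference, the 4529.csv data source this test reads could look something like the following; the column names come from the test above, while the user names and passwords here are only placeholders:

UserName,Password
loaduser01,Secret123!
loaduser02,Secret123!
loaduser03,Secret123!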
Now when I set up the load test in Visual Studio I am asked how many users should do something, and I don't understand what this number means:
Does it only mean the number of simultaneous threads?
How can I make a connection between a user (defined in the load test) and an authenticated user in my Selenium test?
What I would like to achieve:
each user defined in the load test should be an authenticated user in my Selenium UI test.
Could somebody give me an idea of how to do that, or tell me where my thinking is wrong?

A Visual Studio load test provides a way of running other tests repeatedly and simultaneously. It works best with Web Performance Tests but it can run unit tests and Coded UI tests.
When a "constant load" of (say) 25 users is selected then 25 test cases chosen from the "test mix" of the load test will be started. Whenever one of those test cases finishes another test will be chosen and started so that there are always 25 test cases being executed. That will continue until the end of the test run, which is normally either a test duration or a number of iterations. (Here "iterations" means number of test cases executed.)
Assuming "Web Performance Tests" are being used then those tests are responsible for providing the user authentication. A common way of doing that is to data drive the test and provide user names and corresponding passwords in that data. See here for more detail.
Your question asks whether the "constant load" of 25 users means 25 threads. It means that 25 test cases will be running at the same time, but it does not mean 25 Windows threads.
In response to comments:
I think you are misusing or misunderstanding the terminology of Microsoft's test environments. You may be able to have a Selenium test within the test mix of a load test although I have never done it. The user count and the data source are independent items. The user count is about how many simulated users are running tests at the same time. The data source is used by the test cases. If you have 25 users and one data driven test then that test should be started 25 times and those 25 executions should use the first 25 lines of the data source (assuming a Sequential or Unique access method).

To provide the user name and password, look for the query string parameters in your recorded web test and bind them to a data source.
Then add the recorded web test to the load test's test mix.

Related

Testing behavior not consistent when watching actor for termination

When I write tests that involve subscribing to events on the EventStream, or watching actors and listening for Terminated, the tests work fine when run one by one, but when I run the whole test suite those tests fail.
The tests also work if each of them is placed in a separate test class with xUnit.
How come?
A repo with those kinds of tests: https://github.com/Lejdholt/AkkaTestError
I took a look at your repository. I can reproduce the problems you are describing.
It feels like a bug in the TestKit, some timing issue somewhere, but it's hard to pin down.
Also, not all unit test frameworks are created equal. The TestKit uses its own task dispatcher to enable testing of operations that are normally inherently asynchronous.
This sometimes causes conflicts with the test framework being used. It is also, coincidentally, why Akka.NET moved to xUnit for its own CI process.
I have managed to fix your problem by not using the TestProbe, although I'm not sure whether the problem lies with the TestProbe per se or with the fact that you were using a global reference (your 'process' variable).
I suspect that the test framework, while running tests in parallel, might be causing some weird things to happen with your TestProbe reference.
Here is an example of how I changed one of your tests:
[Test]
public void GivenAnyTime_WhenProcessTerminates_ShouldLogStartRemovingProcess()
{
    IProcessFactory factory = Substitute.For<IProcessFactory>();
    var testActor = Sys.ActorOf<FakeActor>("test2");
    processId = Guid.NewGuid();
    factory.Create(Arg.Any<IActorRefFactory>(), Arg.Any<SupervisorStrategy>()).Returns(testActor);

    manager = Sys.ActorOf(Props.Create(() => new Manager(factory)));
    manager.Tell(new StartProcessCommand(processId));

    EventFilter.Info("Removing process.")
        .ExpectOne(() => Sys.Stop(testActor));
}
It should be fairly self-explanatory how to change your other tests in the same way.
The FakeActor is nothing more than an empty ReceiveActor implementation.
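For completeness, a minimal sketch of such a FakeActor (assuming Akka.NET's Akka.Actor namespace; only the class name comes from the test above):

using Akka.Actor;

public class FakeActor : ReceiveActor
{
    // Intentionally empty: the test only needs an actor that can be created and
    // stopped, so that watching it produces the Terminated message the Manager reacts to.
}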

Does Await have overhead in test case

I have two very simple test cases (ScalaTest, but that doesn't matter) and I provide two implementations for accessing some resources; the method returns either a Try or some case class instance.
Test cases:
"ResourceLoader" must
"successfully initialize resource" in {
/async code test
noException should be thrownBy Await.result(ResourceLoader.initializeRemoteResourceAsync(credentials, networkConfig), Duration.Inf)
}
"ResourceLoader" must
"successfully sync initialize remote resources" in {
noException should be thrownBy ResourceLoader.initializeRemoteResource(credentials, networkConfig)
}
These tests exercise different code paths that access some remote resources.
Sync version
def initializeRemoteResource(credentials: Credentials, absolutePathToNetworkConfig: String): Resource = {
  // some code accessing a remote server
}
Async version
def initializeRemoteResourceAsync(credentials: Credentials, absolutePathToNetworkConfig: String): Future[Try[Resource]] = {
  Future {
    // the same code as in the sync version
  }
}
In the IDEA test tab I see that the Future-based version is twice as slow as the sync version. My question: is there overhead in calling Await.result explicitly? If not, why does it slow down the execution? I appreciate any help, thanks.
Note: I know this is not the best way to measure the performance of a production system, but it at least shows how much time was spent on each test case.
Yes, there will be a small overhead for Await.result, but in practice it probably doesn't amount to much. Future {} requires an ExecutionContext (thread pool or thread creator) in implicit scope so you won't be able to successfully use it without importing the default execution context (which will simply spawn a thread) or some other context. If you're using the default execution context, for example, you will have two threads instead of one which will involve some overhead for context switching. It shouldn't be much though. If 'twice as slow' means 40ms instead of 20 then perhaps it's not worth worrying about.

Should I write Unit-Tests for CRUD operations when I have already Integration-Tests?

In our recent project Sonar was complaining about weak test coverage. We noticed that it didn't consider integration tests by default. Besides the fact that you can configure Sonar to consider them (JaCoCo plugin), we were discussing in our team whether there really is a need to write unit tests when you cover all of your service and database layers with integration tests anyway.
What I mean by integration tests is that all our tests run against a dedicated Oracle instance of the same type we use in production. We don't mock anything. If a service depends on another service, we use the real service. Data we need before running a test we construct through factory classes that use our services/repositories (DAOs).
So from my point of view, writing integration tests for simple CRUD operations, especially when using frameworks like Spring Data/Hibernate, is not a big effort. It is sometimes even easier, because you don't have to think about what and how to mock.
So why should I write unit tests for my CRUD operations that are less reliable than the integration tests I can write?
The only point I see is that the integration tests will take more time to run the bigger the project gets, so you don't want to run them all before check-in. But I am not so sure this is so bad if you have a CI environment with Jenkins/Hudson that will do the job.
So - any opinions or suggestions are highly appreciated!
If most of your services simply pass through to your daos, and your daos do little but invoke methods on Spring's HibernateTemplate or JdbcTemplate, then you are correct that unit tests don't really prove anything that your integration tests already prove. However, having unit tests in place is valuable for all the usual reasons.
Since unit tests only test single classes, run in memory with no disk or network access, and never really test multiple classes working together, they normally go like this:
Service unit tests mock the daos.
Dao unit tests mock the database driver (or spring template) or use an embedded database (super easy in Spring 3).
To unit test the service that just passes through to the dao, you can mock like so:
// Assumed fixture fields (not shown in the original answer): service, dao, and sampleEvent.

@Before
public void setUp() {
    service = new EventServiceImpl();
    dao = mock(EventDao.class);
    service.eventDao = dao;
}

@Test
public void creationDelegatesToDao() {
    service.createEvent(sampleEvent);
    verify(dao).createEvent(sampleEvent);
}

@Test(expected = EventExistsException.class)
public void creationPropagatesExistExceptions() {
    doThrow(new EventExistsException()).when(dao).createEvent(sampleEvent);
    service.createEvent(sampleEvent);
}

@Test
public void updatesDelegateToDao() {
    service.updateEvent(sampleEvent);
    verify(dao).updateEvent(sampleEvent);
}

@Test
public void findingDelegatesToDao() {
    when(dao.findEventById(7)).thenReturn(sampleEvent);
    assertThat(service.findEventById(7), equalTo(sampleEvent));

    service.findEvents("Alice", 1, 5);
    verify(dao).findEventsByName("Alice", 1, 5);

    service.findEvents(null, 10, 50);
    verify(dao).findAllEvents(10, 50);
}

@Test
public void deletionDelegatesToDao() {
    service.deleteEvent(sampleEvent);
    verify(dao).deleteEvent(sampleEvent);
}
But is this really a good idea? These Mockito assertions assert that a dao method was called, not that it did what was expected! You will get your coverage numbers, but you are more or less binding your tests to an implementation of the dao. Ouch.
Now, this example assumed the service had no real business logic. Normally the services will have business logic in addition to dao calls, and you surely must test that.
Now, for unit testing daos, I like to use an embedded database.
private EmbeddedDatabase database;
private EventDaoJdbcImpl eventDao = new EventDaoJdbcImpl();

@Before
public void setUp() {
    database = new EmbeddedDatabaseBuilder()
        .setType(EmbeddedDatabaseType.H2)
        .addScript("schema.sql")
        .addScript("init.sql")
        .build();
    eventDao.jdbcTemplate = new JdbcTemplate(database);
}

@Test
public void creatingIncrementsSize() {
    Event e = new Event(9, "Company Softball Game");
    int initialCount = eventDao.findNumberOfEvents();
    eventDao.createEvent(e);
    assertThat(eventDao.findNumberOfEvents(), is(initialCount + 1));
}

@Test
public void deletingDecrementsSize() {
    Event e = new Event(1, "Poker Night");
    int initialCount = eventDao.findNumberOfEvents();
    eventDao.deleteEvent(e);
    assertThat(eventDao.findNumberOfEvents(), is(initialCount - 1));
}

@Test
public void createdEventCanBeFound() {
    eventDao.createEvent(new Event(9, "Company Softball Game"));
    Event e = eventDao.findEventById(9);
    assertThat(e.getId(), is(9));
    assertThat(e.getName(), is("Company Softball Game"));
}

@Test
public void updatesToCreatedEventCanBeRead() {
    eventDao.createEvent(new Event(9, "Company Softball Game"));
    Event e = eventDao.findEventById(9);
    e.setName("Cricket Game");
    eventDao.updateEvent(e);
    e = eventDao.findEventById(9);
    assertThat(e.getId(), is(9));
    assertThat(e.getName(), is("Cricket Game"));
}

@Test(expected = EventExistsException.class)
public void creatingDuplicateEventThrowsException() {
    eventDao.createEvent(new Event(1, "Id1WasAlreadyUsed"));
}

@Test(expected = NoSuchEventException.class)
public void updatingNonExistentEventThrowsException() {
    eventDao.updateEvent(new Event(1000, "Unknown"));
}

@Test(expected = NoSuchEventException.class)
public void deletingNonExistentEventThrowsException() {
    eventDao.deleteEvent(new Event(1000, "Unknown"));
}

@Test(expected = NoSuchEventException.class)
public void findingNonExistentEventThrowsException() {
    eventDao.findEventById(1000);
}

@Test
public void countOfInitialDataSetIsAsExpected() {
    assertThat(eventDao.findNumberOfEvents(), is(8));
}
I still call this a unit test even though most people might call it an integration test. The embedded database resides in memory, and it is brought up and torn down when the tests run. But this relies on the embedded database looking the same as the production database. Will that be the case? If not, then all that work was pretty useless. If so, then, as you say, these tests aren't doing anything different from the integration tests. But I can run them on demand with mvn test, and I have the confidence to refactor.
Therefore, I write these unit tests anyway and meet my coverage targets. When I write integration tests, I assert that an HTTP request returns the expected HTTP response. Yes, that subsumes the unit tests, but when you practice TDD you have those unit tests written before your actual dao implementation anyway.
If you write unit tests after your dao, then of course they are no fun to write. The TDD literature is full of warnings about how writing tests after your code feels like make-work and no one wants to do it.
TL;DR: Your integration tests will subsume your unit tests, and in that sense the unit tests are not adding real testing value. However, when you have a high-coverage unit test suite, you have the confidence to refactor. Of course, if the dao is trivially calling Spring's data access template, then you might not be refactoring. But you never know. And finally, if the unit tests are written first in TDD style, you are going to have them anyway.
You only really need to unit test each layer in isolation if you plan to have the layers exposed to other components outside your project. For a web app, the only way the repository layer can be invoked is by the service layer, and the only way the service layer can be invoked is by the controller layer, so testing can start and end at the controller layer. Background tasks are invoked in the service layer, so they need to be tested there.
Testing with a real database is pretty fast these days, so it doesn't slow your tests down too much if you design your setup/teardown well. However, if there are any other dependencies that could be slow or problematic, then those should be mocked/stubbed.
This approach will give you:
good coverage
realistic tests
minimum amount of effort
minimum amount of refactoring effort
However, testing layers in isolation does allow your team to work more concurrently, so one dev can do the repository and another can do the service for one piece of functionality, and produce independently tested work.
There will always be double coverage when Selenium/functional tests are incorporated, as you can't rely on those alone because they are too slow to run. However, functional tests don't necessarily need to cover all the code; core functionality alone can be sufficient, as long as the rest of the code is covered by unit/integration tests.
I think there are two advantages to having finer-grained tests (I will intentionally not use the words "unit test" here) besides the high-end integration tests.
1) Redundancy: having the layers covered in more than one place acts like a switch. If one set of tests (e.g. the integration tests) fails to locate the error, the second layer may catch it. I will draw a comparison with electrical switches, where redundancy is a must: you have a main switch and a specialized switch.
2) Let's suppose you have a process calling an external service. For one reason or another (a bug) the original exception gets consumed, and an exception that does not carry information about the technical nature of the error reaches the integration test. The integration test will catch the error, but you will have no clue what the error is or where it is coming from. Having a finer-grained test in place increases the chance of pointing in the right direction as to what exactly has failed and where.
I personally think that a certain level of redundancy in testing is not a bad thing.
In your particular case, if you write a CRUD test with an in-memory database you will have the chance to test your Hibernate mapping layer, which can be quite complex if you are using things like cascading, fetching, and so on.

Virtual User concurrency in VSTS 2008 Load Testing

I am load testing a server that requires that a user not be connected more than once at a time.
If I bind the VUsers to real users, will this ever occur, or can I be sure that a VUser will not be reused until its previous iteration is complete?
I've created a load test to test this.
Roughly:
public void Method1()
{
    Trace.WriteLine(userId);
}

public void Method2()
{
    Trace.WriteLine(userId + " locked");
    Thread.Sleep(TimeSpan.FromMinutes(5));
}
Mix these two up and you'll see that as soon as a userId is locked up in Method2, it does not hit either method again for 5 minutes. And when all users are locked up, the test just sits idle until one is released.
You can set the user that the load test connects as when you create a coded web test. Adding some code to get the user from a pool could work, but it would be challenging, as the code to get a new user could easily become a bottleneck and is open to concurrency/multithreading bugs.
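A minimal sketch of that pool idea, assuming the tests in the mix can reach a shared static class (the class name, the pool size, and the user naming scheme are illustrative, not from the original answer):

using System.Collections.Generic;

public static class UserPool
{
    private static readonly Queue<string> Available = new Queue<string>();
    private static readonly object Gate = new object();

    static UserPool()
    {
        // Seed the pool with the test accounts that exist on the server under test.
        for (int i = 1; i <= 100; i++)
        {
            Available.Enqueue("loaduser" + i.ToString("D3"));
        }
    }

    // Take a user for the duration of one test iteration; returns false if all are in use.
    public static bool TryAcquire(out string userName)
    {
        lock (Gate)
        {
            if (Available.Count > 0)
            {
                userName = Available.Dequeue();
                return true;
            }
            userName = null;
            return false;
        }
    }

    // Return the user at the end of the iteration so another virtual user can reuse it.
    public static void Release(string userName)
    {
        lock (Gate)
        {
            Available.Enqueue(userName);
        }
    }
}

The lock keeps acquisition and release thread-safe, but as noted above, a single shared pool like this can itself become a bottleneck under heavy load.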

MSTest and custom messages

Recently I started using MSTest for testing.
Is there any way to write messages to the test window if a test succeeds? I don't see one; messages seem to be allowed only if the test fails. What if I want to, say, print a little description of a test so I can see what the test means without having to open it? Or, as is the case now, I'm measuring execution times for some tests and I want to print those times out.
Is there a way to extend test methods so I can easily choose whether to run tests with or without time measuring, i.e. choose the mode of test execution?
Thanks
Right click on the columns in the test result window and choose "Add/Remove Columns". Add the columns for "Duration" and "Output (StdOut)". That will give you test timing and let you see what the tests print.
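A small sketch of how a passing test can still produce output that shows up there (the class, method, and message are illustrative; the only assumption is an ordinary MSTest project):

using System;
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TimedTests
{
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void ImportantOperationCompletesQuickly()
    {
        var stopwatch = Stopwatch.StartNew();

        // ... exercise the code under test here ...

        stopwatch.Stop();

        // Written output is kept with the test result even when the test passes.
        TestContext.WriteLine("Important operation took {0} ms", stopwatch.ElapsedMilliseconds);
        Console.WriteLine("Important operation took {0} ms", stopwatch.ElapsedMilliseconds);
    }
}

Console.WriteLine goes to the standard output shown by the "Output (StdOut)" column, while TestContext.WriteLine ends up in the detailed test result.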
Why not give your tests descriptive names?
[TestMethod]
public void AddsTwoNumbersTogether() {...}

[TestMethod]
public void DividesFirstNumberBySecondNumber() {...}
etc.
