I have created an integration test for the USERNAME_CONFLICT case of my app, as it's not allowed to have the same username twice.
What I did is call the same request twice in the same test (using TestRestTemplate), so that the second call returns USERNAME_CONFLICT, which is what I want in order for my test to succeed.
My problem is that the test is taking too much time, over 800ms, and I don't think this is good practice. What ideas do you have to test such cases while keeping the test time minimal?
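For illustration, the duplicate-username rule can be exercised without two full HTTP round trips by testing the uniqueness logic against an in-memory stand-in for the repository; a sketch, where `UserService` and `register` are hypothetical names, not from the app in question:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical service capturing just the uniqueness rule, backed by an
// in-memory set so the test needs no HTTP round trip or real database.
class UserService {
    private final Set<String> usernames = new HashSet<>();

    // Returns false when the username is already taken (the USERNAME_CONFLICT case).
    boolean register(String username) {
        return usernames.add(username);
    }
}

class UsernameConflictTest {
    public static void main(String[] args) {
        UserService service = new UserService();
        if (!service.register("alice")) throw new AssertionError("first registration should succeed");
        if (service.register("alice")) throw new AssertionError("duplicate should be rejected");
        System.out.println("USERNAME_CONFLICT detected without any HTTP calls");
    }
}
```

A common compromise is to keep one slower TestRestTemplate test for the full HTTP path and cover the conflict rule itself at this faster level.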
I want to run some code (obtain a valid OAuth token) before sending the SOAP requests of my web tests.
I am using Visual Studio.
I want to run the code that obtains the OAuth token before ALL tests, not before each one.
Is that possible?
Thanks!!
Assuming the web tests are being run as part of a load test, you can create a load test plugin and run some code from the load test starting event.
Your question reads as if you need to run one web test (possibly it contains just one request) to get the token and then run all the other tests. This can be achieved by using two scenarios in the load test. The first scenario contains the get-token web test, it is set to have one user and one iteration. The second scenario runs all the other web tests, the only change to it is to set its Delay Start Time property to a value that allows the get-token web test to complete.
I have a web performance test that begins with a webforms login, executes a few steps and then finishes.
Mostly this runs without errors, but if I extend the load test run beyond 15 minutes I start to get failures: some tests send a Session and Auth cookie on the initial GET to the root URL.
Clearly the test recording does not have cookies on the initial request. Additionally, I have set the "Percentage of New Users" on the scenarios to 100% to ensure that all tests are running as a new user.
The test is databound to a list of 600 users in a User Pace scenario. Nothing very heavy.
However, I cannot identify why, after a period of time (12 minutes), some of the tests begin to send the cookies on the initial request!
Can anyone give me any pointers please?
This is an old question and on re-reading it not very clear.
The scenario reflects more my lack of knowledge of the web testing features I was using.
I am fairly certain it was caused by a missing "log out" test step combined with the configuration of the load test probably re-using connections.
After much prodding around I achieved some clean runs.
We are developing 2 different web applications (WARs).
Both use the same message bus (ActiveMQ - JMS).
We would like to perform tests that trigger an action on webapp#1; that action should cause a message to be sent, which is consumed by webapp#2 and mutates the DB.
How can we test this end-to-end scenario?
We would like to have an automated test for that, and would like to avoid manual testing as much as possible.
We are using JUnit with the Spring framework, and already have tons of JUnit tests that are performed daily, but none of them so far involves the message bus. It appears that this scenario is a whole different story to automate.
Are there any possibilities to test this scenario with an automated script (Spring / JUnit / other)?
A JUnit test could certainly drive this integration test sequence:
send an HTTP request to webapp#1 to trigger the action, using HttpURLConnection for example
run a SQL command (using JDBC) to detect whether the database contains the expected value
In the test setup, the database needs to be initialized (reset) so that the second step does not give a false-positive result.
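Because webapp#2 consumes the JMS message asynchronously, the JDBC check in the second step usually has to be retried until a deadline rather than asserted immediately. A minimal sketch of such a polling helper (the `Poller` class and `pollUntil` name are illustrative, not part of JUnit or Spring):

```java
import java.util.function.BooleanSupplier;

// Illustrative polling helper: retry a condition until it holds or a
// deadline passes, instead of asserting once and racing the message bus.
class Poller {
    // Polls until the condition holds or the timeout elapses; returns whether it held.
    static boolean pollUntil(BooleanSupplier condition, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (condition.getAsBoolean()) return true;
            if (System.currentTimeMillis() >= deadline) return false;
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop waiting if the test is interrupted
                return false;
            }
        }
    }

    public static void main(String[] args) {
        // Demo with a condition that becomes true on the third check.
        int[] calls = {0};
        System.out.println(pollUntil(() -> ++calls[0] >= 3, 1000, 10));
    }
}
```

In the real test, the condition would run the JDBC query from the second step and check for the expected row.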
I have a Visual Studio 2010 Load test, which contains a number of web performance tests. Running the web performance tests requires you to be logged in to the website under test. Accordingly, the load test contains an initialization step - a small web performance test which does the login, and which uses a plug-in to cache the cookie so obtained. The 'real' web performance tests - the ones that actually do the work - also each have a plug-in that reads the cached cookie and adds it to the test, so that each test functions correctly:
public override void PreWebTest(object sender, PreWebTestEventArgs e)
{
    if (CookieCache.Cookies != null) // CookieCache is a static class of mine
        e.WebTest.Context.CookieContainer.Add(CookieCache.Cookies);
}
The problem is that while this all works absolutely fine when I run the load test, it means I can't run any of the web performance tests in isolation because if the load test initializer hasn't run then there's no cookie, so the web performance test won't be logged in and will fail.
Is there any recommended solution for this situation? In other words, if a web performance test needs to have logged in, is there any way to get it to run both in isolation and when it's part of a load test?
The obvious way to run each web performance test in isolation would be to have it call the login test first, but I can't do that because that'll be incorrect behaviour for the load test (where logging in should happen only once per user, right at the beginning of the load test).
The solution is to add the Login test to your individual web performance tests (via "Insert Call to Web Test"), but gated by a "Context Parameter Exists" Conditional Rule that looks for the absence of the context parameter $LoadTestUserContext. That parameter only exists if the web test is running in a load test.
This way you get just one Login whether in or outside of a load test.
Why not try using the PreRequest function instead of the PreWebTest function?
Public Overrides Sub PreRequest(sender As Object, e As PreRequestEventArgs)
    MyBase.PreRequest(sender, e)
    Dim cookie As System.Net.Cookie = New System.Net.Cookie(...)
    e.Request.Cookies.Add(cookie)
End Sub
That way both the Load test and the Web Test will work.
I'm not familiar with Visual Studio 2010 Load Testing, but it sounds like you need the equivalent of NUnit's SetUp and TearDown methods which run once for all tests, whether you have selected a single test or all the tests in an assembly.
A bit of searching implies that the equivalent is the Init and Term tests.
1) Right click on a scenario node in the load test and select Edit Test Mix...
2) In the edit test mix dialog, look at the bottom of the form. You will see 2 check boxes: one for an init test and one for a term test.
The init test will run prior to each user and the term test will run when the user completes. To make sure the term test runs, you also need to set the cooldown time for the load test. The cooldown time is a property on the run settings node. This setting gives tests a chance to bleed out when the duration completes. You can set this to 5 minutes. The cooldown period does not necessarily run for 5 minutes; it will end when all term tests have completed. If that takes 20 seconds, then that is when the load test will complete.
I am running some unit tests that persist documents into the MongoDB database. For these unit tests to succeed the MongoDB server must be started. I do this by calling Process.Start("mongod.exe").
It works, but sometimes the server takes time to start, and the unit test tries to run before it is ready and FAILS, complaining that the MongoDB server is not running.
What to do in such situation?
If you use an external resource (DB, web server, FTP, backup device, server cluster) in a test, then it is an integration test rather than a unit test. It is neither convenient nor practical to start all those external resources from the test. Just ensure that your tests run in a predictable environment. There are several ways to do that:
Run the test suite from a script (BAT, NAnt, WSC) which starts MongoDB before running the tests.
Start MongoDB on a server and never shut it down.
Do not add any loops with delays in your tests to wait while the external resource starts - it makes tests slow, erratic and very complex.
Can't you run a quick test query in a loop with a delay after launching and verify the DB is up before continuing?
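That idea can be sketched as a bounded wait loop run once in test setup, right after launching mongod; `MongoWaiter` is a hypothetical helper, and 27017 is MongoDB's default port:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical helper: after launching mongod, block test setup until the
// server's port accepts connections, with a hard deadline so a broken
// environment fails fast instead of hanging the test run.
class MongoWaiter {
    static boolean waitForPort(String host, int port, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 250);
                return true; // server is accepting connections
            } catch (IOException notReadyYet) {
                try {
                    Thread.sleep(100); // brief pause before retrying
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Fail the setup if the default MongoDB port never opens.
        System.out.println(waitForPort("localhost", 27017, 1_000));
    }
}
```

Unlike an unbounded sleep-and-retry loop, the deadline keeps a misconfigured environment from stalling the whole suite.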
I guess I'd (and by that I mean this is what I've done, but there's every chance someone has a better idea) write some kind of MongoTestHelper that can do a number of things during the various stages of your tests.
Before the test run, it checks that a test mongod instance is running and, if not, boots one up on your favourite test-mongo port. I find it's not actually that costly to just try to boot up a new mongod instance and let it fail because the port is already in use. However, this is very different on Windows, so you might want to check that the port is open or something.
Before each individual test, you can remove all the items from all the tested collections, if that is the kind of thing you need. In fact, I just drop all the DBs, as the lovely MongoDB will recreate them for you:
for (String name : mongo.getDatabaseNames()) {
    mongo.dropDatabase(name);
}
After the tests have run you could always shut it down if you've chosen to boot up on a random port, but that seems a bit silly. Life's too short.
The TDD purists would say that if you start the external resource, then it's not a unit test. Instead, mock out the database interface, and test your classes against that. In practice this would mean changing your code to be mockable, which is arguably a good thing.
OTOH, to write integration or acceptance test, you should use an in-memory transient database with just your test data in it, as others have mentioned.