Our applications have a lot of functional tests written with Selenium.
We understand that it is good practice to keep the server where the tests are run as similar as possible to the production servers, and we try to follow that as much as possible.
But that is very hard to achieve 100%, so we have a different settings file for our server with some changes that we want in the staging environment (for example, we opt to turn e-mail sending off because of the additional architecture it requires).
In fact, lots of server frameworks recommend having an isolated front controller (environment) for testing, to make these small changes easy to achieve.
By default, most frameworks, including ours, recommend that the testing environment have its cache turned off. Why?
If we want to emulate production as much as possible, what is the possible advantage of having the server's cache turned off when performing functional tests? There can be bugs that are only found with the cache on, and keeping it on might also have the benefit of speeding up test execution!
Don't we just need to make sure that the cache is cleared before starting a new batch of functional tests, the same way we clear it when deploying a new version to production?
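Concretely, something like this minimal sketch is what I have in mind (Python here just for illustration, and the cache-clearing command is a placeholder for whatever our framework actually provides):

```python
import subprocess
import unittest

class FunctionalTestBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Same step we would run when deploying a new release:
        # "app/console cache:clear" is a placeholder for whatever cache-clearing
        # command or endpoint the framework actually exposes.
        subprocess.run(["app/console", "cache:clear", "--env=staging"], check=True)
```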
A colleague of mine suggests that the reason could be that the cache can generate false positives: errors caused not by badly implemented features (the main target of those tests) but by the cache system itself. But even if those really happen (I suppose it depends on how the cache is used), why would they be false positives?
To best answer this question, I will clarify some points (be aware that this is based on my experience).
Integration tests using the browser are typically "black-box tests", which means that they are written without knowledge of the code, that is, without knowing whether the cache is being used or not.
These tests are usually designed based on certain tasks that are performed during normal use of the system. But these tasks are chosen for automation depending on certain conditions of use (mainly reusability and criticality/importance, but also the cost of implementation). So most of the time we will not need/want to test caching behaviour.
By convention, any test must be created with a single purpose and with as few dependencies as possible. Why?
When a test fails, we can quickly find the source of the failure.
Smaller tests are easier to extend, fix, remove...
We do not spend too much time first debugging the test code and then debugging the system code.
Integration testing should follow this convention.
Answering the question:
If we want to check a particular task, we must isolate it as much as possible.
For example, if we want to verify that the user logs in correctly, we have to delete the cookies to be sure that they do not influence the result (because they may). If, on the other hand, we want to test the cookies, we have to somehow use an environment where they are not deleted.
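For illustration, here is a minimal Python Selenium sketch of that isolation step; the URL and element names are assumptions, not anyone's real application:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://staging.example.com/login")
    driver.delete_all_cookies()   # make sure no previous session influences the result
    driver.get("https://staging.example.com/login")  # reload with a clean state
    driver.find_element(By.NAME, "username").send_keys("test-user")
    driver.find_element(By.NAME, "password").send_keys("test-password")
    driver.find_element(By.NAME, "login").click()
    assert "Dashboard" in driver.title   # assumed post-login page title
finally:
    driver.quit()
```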
So, in short:
If there is a need to test caching behaviour, then we need to create an "isolated" environment where this is possible.
The usual purpose of integration tests is to test the functionality, so the framework's default is to have the cache disabled.
This does not mean that we shouldn't create our own environment to test the caching behaviour.
For functional automation we usually create a framework which is reusable for automating applications. Is there any way to create a performance-testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information regarding it.
You can consider JMeter itself a "framework" which already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts at the protocol level, so the requests will be different for different applications.
There is a mechanism in JMeter for re-using pieces of a test plan as modules so you won't have to duplicate your work; check out Test Fragments and the Module Controller. However, it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to a continuous integration process, so you have a build step which executes the performance tests and publishes reports. Basically, you can re-use the same test setup and reporting routine, and the only thing which changes from application to application will be the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API where the exposed external interface is static, but the code that handles it on the back end is changing, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and accept that you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there some techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more intelligence under the covers of your test interface, the resource cost of each virtual user is higher, which dictates more hosts for test execution and more skew from client-side code, which can distort the view of what is coming from the server. For example, run a Selenium script instead of a base JMeter script: a browser is invoked, you get the benefit of all of the local JavaScript processing to handle the dynamic changes, and your script has a longer life.
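To make the contrast concrete, here is a rough Python sketch (URLs, form fields and element ids are all assumptions): the protocol-level request has to reproduce every value the server expects, while the browser-level script lets the page's own JavaScript fill them in:

```python
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

# Protocol level: every field, token and header is ours to maintain,
# and the script breaks as soon as a hidden value is added or renamed.
requests.post(
    "https://shop.example.com/checkout",
    data={"item_id": "42", "qty": "1", "csrf_token": "copied-from-a-previous-response"},
)

# Browser level: drive the UI and let the client-side code do the rest.
driver = webdriver.Firefox()
driver.get("https://shop.example.com/item/42")
driver.find_element(By.ID, "add-to-cart").click()
driver.find_element(By.ID, "checkout").click()
driver.quit()
```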
I was taking an exam yesterday, and I noticed they asked in which order the following occur (I'll put the order I deemed correct here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load-testing or database tuning oughta be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of validation testing, system testing is the correct term. Database testing is part of integration and system testing. Also, load testing is performed during the system and user acceptance testing phases.
End-to-end testing means exercising an application from its outer boundaries to verify its behavior. So far I've only written tests for a single executable artifact. How should I test systems made up of multiple artifacts that are deployed on different hosts?
I see two alternatives.
The tests set up the whole system and exercise it from the very outer edges.
Each artifact is end to end tested in isolation, relying on the test content to enforce the protocol between them.
Is there a clear case for adhering to only one of these, or is one of them preferred, or are they interchangeable? If interchangeable, what are some advantages and disadvantages of each?
Even though I think it depends on the context, I prefer the first alternative. Here are my random thoughts:
I like my tests to be as closely mapped to use cases as possible (BDD style) (with the disclaimer that I misuse the term use case). These use cases may span several applications and sub-systems.
Example: A back office administrator can view a transaction made by a user from the public interface.
Here, the back office admin interface and the public interface are different applications, but they are included in the same use case.
Mapping these thoughts to your problem where you have sub-systems deployed on different hosts, I would say it depends on how it is used, from the user/actor perspective. Do the use cases span several sub-systems?
Also, perhaps the fact that the system is deployed on several hosts isn't important to the tests. You could replace the inter-process communication with method calls and have the whole system within the same process during the tests, reducing the complexity. Supplement this with tests that only verify the inter-process communication.
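As a rough sketch of what I mean (all names here are hypothetical), the back office code can depend on a small interface whose production implementation goes over the network, while the test implementation calls the other sub-system directly, in the same process:

```python
from typing import Protocol

import requests


class TransactionService(Protocol):
    def find_by_user(self, user_id: str) -> list: ...


class HttpTransactionService:
    """Production wiring: talks to the public-interface host over HTTP."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def find_by_user(self, user_id: str) -> list:
        return requests.get(f"{self.base_url}/users/{user_id}/transactions").json()


class InProcessTransactionService:
    """Test wiring: call the other sub-system directly, no network involved."""

    def __init__(self, transactions_module) -> None:
        self.transactions_module = transactions_module

    def find_by_user(self, user_id: str) -> list:
        return self.transactions_module.transactions_for(user_id)


# The back office code only depends on TransactionService, so the whole
# use case can run inside one process during the tests.
```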
Edit:
I realise that I forgot to include why I prefer to test the whole system.
Your asset is features, that is, behaviour, while the code is a liability. Therefore you'd like to test the behaviour, not the code (BDD style).
If you are testing each sub-system separately you are testing the code, not the features. Why? When you divided your system into sub-systems you did so based on some technical reasons. When you learn more you might discover that the chosen seam is sub-optimal and would like to move some responsibility from one sub-system to another. And you would have to modify test and production code at the same time, leaving you without a safety net. That's a typical symptom of testing implementation details.
That said, these kinds of tests are too blunt to test everything. So you need to have complementary tests for the details as well, where necessary.
Testing each artifact end-to-end separately would be highly desirable in any case. This will ensure that every artifact is sound.
In addition, you might want to test a composition of artifacts. That would catch problems in the interactions between artifacts. I don't know about your situation, but one thing that is important to have is a test environment that is a copy of production. Testing the system in the test environment is a very good idea. You might also want to test the system in the production environment; this might be feasible or not. For instance, if your system processes credit card payments, you may want to avoid test payments on the production system.
In any case, testing each artifact separately is imho more important than testing the composition. Once you know that your artifacts are sound in isolation, finding interaction problems will be much easier. If you only have the end-to-end test of the whole system, it's much more difficult to understand where the error is when the tests fail.
We're currently using MBUnit for both unit testing and UI testing. For UI testing, the setup cost along the test matrix axes is pretty high (login, browser instance, navigating to the page, etc.). In order to avoid setting these up for each test case we are partly relying on AssemblyFixture to manage some of them.
However, because it's not possible to filter out cases that are not applicable to a certain combination, we can't really use that optimization. So currently we are doing some of the setup per test case, which is horribly inefficient.
We could put if statements inside the test code to check for correct combinations, but we don't want that either; it pollutes the test code.
How do you guys do such optimizations, or manage the test matrix? Is there a better practice, in another testing framework?
Until recently, I've always thought of UI automation as black-box testing, where my UI tests drive a fully standalone web site or application. As a result, the tests run under the constraints of normal execution and are subject to a host of environment overhead issues.
I've recently adopted the notion of "shallow" and "deep" UI tests where each set of tests run under an optimized configuration to ease environmental differences and speed things up. For example, the login controller is swapped out with a mechanism that avoids OAuth login overhead and is hard coded with fixed usernames. The product catalog skips database lookup and is hard coded with a few fixed items. The ecommerce backend is swapped out to perform speedy operations that accept/reject transactions based on credit card and amount.
Under a "shallow" configuration I can perform "deep" testing against the UI logic. When I switch to a "deep" configuration, it resembles production and I can perform "shallow" testing of fully integrated components such as login, product catalog, search, etc.
A mix of testing strategies is required.
Maybe the ui-test-automation-best-practices article is helpful for you. It has some examples of how to improve the performance of automated UI testing by minimizing logins and context changes.
I'm relatively inexperienced when it comes to Unit Testing/Automated Testing, so pardon the question if it doesn't make any sense.
The current code base I'm working on is so tightly coupled that I'll need to refactor most of the code before ever being able to run unit tests on it, so I read some posts and discovered Selenium, which I think is a really cool program.
My client would like specific automated tests to run every ten minutes on our production server to ensure that our site is operational, and that certain features/aspects are running normally.
I've never really thought to run tests against a production server because you're adding additional stress to the site. I always thought you would run all tests against a staging server, and if those pass, you can assume the production site is operational as long as the hosting provider doesn't experience an issue.
Any thoughts on your end on testing production code on the actual production server?
Thanks a lot guys!
Maybe it would help if you thought of the selenium scripts as "monitoring" instead of "testing"? I would hope that every major web site out there has some kind of monitoring going on, even if it's just a periodic PING, or loading the homepage every so often. While it is possible to take this way too far, don't be afraid of the concept in general. So what might some of the benefits of this monitoring/testing to you and your client?
Somehow not all the best testing in the world can predict the odd things users will do, either intentionally or by sheer force of numbers (if a million monkeys on typewriters can write Hamlet, imagine what a few hundred click-happy users can do?). Pinging a site can tell you if it's up, but not if a table is corrupted and a report is now failing, all because a user typed in a value with an umlaut in it.
While your site might perform great on the staging servers, maybe it will start to degrade over time. If you are monitoring the performance of those Selenium tests, you can stay ahead of slowness complaints. Of course, as you mentioned, be sure your monitoring isn't causing problems either! You may have to convince your client that certain tests are appropriate to run every X minutes, and others should only be run once a day, at 3am.
If you end up making an emergency change to the live site, you'll be more confident knowing that tests are running to make sure everything is ok.
I have worked on similar production servers for a long time. From my experience, I can say that it is always better to test changes/patches in the staging environment and only then deploy them to the production servers. This is because the staging and production environments are alike, except for the volume of data.
If really required, it would be OK to run a few tests on the production servers once the code/patch is installed. But always running the tests on the production server is not a recommended or good practice.
My suggestion would be to shadow the production database down to a staging/test environment on a nightly basis and run your unit tests there nightly. The approach suggested by the client would be good for making sure that new data introduced into the system didn't cause exceptions within the system, but I do not agree with doing this in production.
Running it in a staging environment would give you the ability to evaluate features as new data flows into the system without using the production environment as a test bed.
[edit] To make sure the site is up, you could write a simple program that pings it every 10 minutes rather than running your whole test suite against it.
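A minimal sketch of such a program in Python (the URL and the marker text it checks for are assumptions; hook the failure branch into whatever alerting you already have):

```python
import time
import requests

URL = "https://www.example.com/"

while True:
    try:
        response = requests.get(URL, timeout=30)
        ok = response.status_code == 200 and "Welcome" in response.text
    except requests.RequestException:
        ok = False
    if not ok:
        print("ALERT: site check failed")  # replace with e-mail/pager integration
    time.sleep(600)  # 10 minutes
```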
What will change in production environment that you would need to run automated tests? I understand that you may need monitoring and alerts to make sure the servers are up and running.
Whatever the choice, whether it be a monitoring or testing type solution, the thing that you should be doing first and foremost for your client is warning them. As you have alluded to, testing in production is almost never a good idea. Once they are aware of the dangers and if there are no other logical choices, carefully construct very minimal tests. Apply them in layers and monitor them religiously to make sure that they aren't causing any problems to the app.
I agree with Peter that this sounds more like monitoring than testing. A small distinction but an important one I think. If the client's requirements relate to Service Level Agreements then their requests do not sound too outlandish.
Also, it may not be safe to assume that if the service provider is not experiencing any issues that the site is functioning properly. What if the site becomes swamped with requests? Or perhaps SQL that runs fine in test starts causing issues (timeouts, blocking etc.) with a larger production database?