WebDriver/Ruby test reporting?

My team is currently building a WebDriver test framework in Ruby. We are looking for a way to generate test completion reports so they can be emailed out, ideally including individual test and test verification point results.
As an example of what I mean by test verification points: a test which creates a product could have multiple verification points, such as whether the product name was created correctly and whether the product price was created correctly. If the test completion report could specify which verification point failed, it would make assessing failures a lot quicker.
The reports that can be output from the Selenium IDE are pretty much what I'm after.

Since you are using Ruby, you can consider storing your verification point outcomes, test case status, etc. in a DB such as MySQL or SQLite. This gives you the ability to perform various analyses on the health of your tests, past and present. Based on this you can even predict future trends.
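For illustration, a minimal sketch of that idea using the sqlite3 gem (the table and column names are just assumptions for the example, not anything prescribed):

    # Minimal sketch: persisting verification point results with the sqlite3 gem.
    require "sqlite3"
    require "time"

    db = SQLite3::Database.new("test_results.db")
    db.execute <<~SQL
      CREATE TABLE IF NOT EXISTS verification_points (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        test_name  TEXT,
        point_name TEXT,
        passed     INTEGER,  -- 1 = pass, 0 = fail
        message    TEXT,
        run_at     TEXT
      )
    SQL

    # Call this from the framework after each verification point is evaluated.
    def record_verification(db, test_name, point_name, passed, message = nil)
      db.execute(
        "INSERT INTO verification_points (test_name, point_name, passed, message, run_at)
         VALUES (?, ?, ?, ?, ?)",
        [test_name, point_name, passed ? 1 : 0, message, Time.now.utc.iso8601]
      )
    end

From there, an email-friendly summary (e.g. failures grouped per test) is a simple SELECT away.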

Maybe the Allure report and its RSpec adapter could suit your requirements? This report is fairly new and gives you a wide range of features, like grouping tests by BDD features and stories, saving attachments, parameters and so on.
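If you go that route, the wiring is small; a rough sketch only (the configuration option names have changed between adapter versions, so treat this as the general shape and check the allure-rspec README for your version):

    # Sketch only: hooking the Allure RSpec adapter into a suite.
    require "allure-rspec"

    AllureRspec.configure do |config|
      config.results_directory = "reports/allure-results"  # option name assumed; differs across versions
    end

    RSpec.configure do |config|
      config.formatter = AllureRspecFormatter  # writes Allure result files per example
    end

The Allure command-line tool then turns the results directory into the HTML report.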

Related

How to create a performance testing framework in JMeter?

For functional automation we usually create a framework which is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information regarding it.
You can consider JMeter itself as a "framework" which already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts at the protocol level, so the requests will differ from application to application.
There is a mechanism in JMeter that allows you to re-use pieces of a test plan as modules so you won't have to duplicate your code (check out Test Fragments and the Module Controller), however it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests into continuous integration process so you will have a build step which will execute performance tests and publish reports, basically you will be able to re-use the same test setup and reporting routine and the only thing which will change from application to application will be .jmx test script(s). See JMeter Maven Plugin and/or JMeter Ant Task for more details.
You must first ask yourself how dynamic the conversation you are attempting to replicate is. If you have a very stable services API where the exposed external interface is static, but the code handling it on the back end is changing, then you have a good shot at building something which has a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and accept that you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is: the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that the more intelligence sits under the covers of your test interface, the higher the resource cost per virtual user, which in turn dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. For example, run a Selenium script instead of a base JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.

Integrating JMeter performance tests with Jenkins CI

I'm working on integrating performance tests with CI/CD infrastructure. My performance testing tool is JMeter and the CI server is Jenkins. Both can do their jobs, but when it comes to integrating performance tests into a CI/CD pipeline, things are no longer so trivial.
To have a proper deployment pipeline, the CI server needs to know whether a performance test build should be considered passed or failed. Verifying the overall mean response time is not a good option - completely different SLAs can apply to different types of transactions executed as part of the same JMX file. Asserting on the mean response time of a particular transaction type is a far better option, but it is still far from a perfect solution. It won't tell us, for example, whether response times for the same type of transaction are increasing (which can have something to do with memory leaks) or decreasing (which can be a blessing of a server-side cache). For that reason, relying only on mean response times can create false confidence in software quality.
I analysed a couple of tools, including JMeter Maven Analysis Plugin and Jenkins Performance Plugin. None of them seems to offer what I'm looking for.
In the pre-CI era, performance tests were executed late in the development lifecycle and analysed by a human being. I wonder if anyone has come across a tool advanced enough to let a CI server reliably determine whether a perf test build should be marked as passed or failed, without a human verifying the results?
In the absence of tools offering what I'm looking for, I've decided to kick off an open source project and create one on my own, in my free time:
https://github.com/automatictester/lightning
It's still in its early days, but the core functionality is there. Now it's a matter of time to extend it with extras.
I am a Jenkins expert but only mildly skilled with JMeter. Can your JMeter results be processed by a script in order to tell when Transaction Type Z passes beyond the acceptable time limit for the run?
It looks like parsing the JMeter results with some additional logic is what you need, so you can bubble up an exit(1) (or any non-zero) value when a threshold is breached.
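As a sketch of that approach: a short Ruby script that reads a JMeter CSV results file (.jtl with the default headers), computes the average elapsed time per label, and exits non-zero when a label breaches its limit so the Jenkins shell step fails the build. The label names and limits below are made up for the example:

    # Sketch: fail the build when a transaction type breaches its response-time limit.
    # Assumes CSV .jtl output with the default JMeter headers (label, elapsed, ...).
    require "csv"

    SLA_MS = {
      "Login"    => 800,   # hypothetical transaction label => max average in ms
      "Checkout" => 1500
    }

    samples = Hash.new { |h, k| h[k] = [] }
    CSV.foreach(ARGV.fetch(0, "results.jtl"), headers: true) do |row|
      samples[row["label"]] << row["elapsed"].to_i
    end

    failed = false
    SLA_MS.each do |label, limit|
      times = samples[label]
      next if times.empty?
      avg = times.sum / times.size
      puts format("%-10s avg %5d ms (limit %d ms)", label, avg, limit)
      failed = true if avg > limit
    end

    exit(failed ? 1 : 0)

Jenkins marks a shell build step (and hence the build) as failed on any non-zero exit code, so no extra plugin is needed for the pass/fail decision.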

MBUnit test matrix optimization-performace problems in automated ui tests

We're currently using MBUnit for both unit testing and UI testing. For UI testing, the setup cost for the test matrix axes is pretty high (login, browser instance, navigating to the page, etc.). In order to avoid setting these up for each test case we partly rely on AssemblyFixture to manage some of them.
However, because it's not possible to filter out cases that are not applicable to a certain combination, we can't really use that optimization. So currently we do some of the setup per test case, which is horribly inefficient.
We could put if statements inside the test code to check for correct combinations, but we don't want that either; it pollutes the test code.
How do you handle such optimizations, or test matrix management in general? Is there a better practice, perhaps in another testing framework?
Until recently, I'd always thought of UI automation as black-box testing where my UI tests drive a fully standalone web site or application. As a result, the tests run under the constraints of normal execution and are subject to a host of environment overhead issues.
I've recently adopted the notion of "shallow" and "deep" UI tests where each set of tests run under an optimized configuration to ease environmental differences and speed things up. For example, the login controller is swapped out with a mechanism that avoids OAuth login overhead and is hard coded with fixed usernames. The product catalog skips database lookup and is hard coded with a few fixed items. The ecommerce backend is swapped out to perform speedy operations that accept/reject transactions based on credit card and amount.
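To make the swap concrete, here is a hypothetical Ruby-flavoured sketch of that kind of switch on the application side (all class, variable and user names are invented for the illustration; the answer itself is tool-agnostic):

    # Hypothetical sketch of the "shallow"/"deep" switch: the app's composition
    # root binds either a stubbed login provider or the real OAuth one.
    class StubLoginProvider
      # No OAuth round-trips: accept a fixed set of test usernames immediately.
      FIXED_USERS = ["test-user-1", "test-user-2"]

      def authenticate(username, _password)
        FIXED_USERS.include?(username)
      end
    end

    class OauthLoginProvider
      def authenticate(username, password)
        # ... real OAuth handshake against the identity provider ...
      end
    end

    LOGIN_PROVIDER =
      ENV["TEST_DEPTH"] == "shallow" ? StubLoginProvider.new : OauthLoginProvider.new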
Under a "shallow" configuration I can perform "deep" testing against the UI logic. When I switch to a "deep" configuration, it resembles production and I can perform "shallow" testing of fully integrated components such as login, product catalog, search, etc.
A mix of testing strategies is required.
Maybe the ui-test-automation-best-practices article is helpful for you. It has some examples of how to improve the performance of automated UI testing by minimizing logins and context changes.

How would you stress test a dynamic site, when you don't know what the URLs will be ahead of time?

This isn't a question of what stress testing tools are out there. I'm afraid it's a lot harder than that. (At least for me)
Consider a restful architecture for a forum or blog that generates random IDs for each post.
Simulating creating those topics/articles would be simple, because you'd just be posting form data to an endpoint like: /article, or /topic
But how do you then stress test commenting on those articles/topics? This is different, because the comments need to belong to an article/topic, which means that you need the ids of those items. However, if all you can do is issue posts, and you have no way of pulling those ids, you'd be unable to create them.
I'm creating a site that is similar in this regard, and I have no idea how to stress test the creation of the comments.
I have two ideas, and they're both pretty awful:
Generate a massive system ahead of time with some kind of factory, and then freeze it. From there, I figure I'd have to use some kind of browser automation to create my 'comments' on all of it. The automation would, I suppose, go through a recording proxy, like the one JMeter offers. Then, to run the test, I reload the database and replay the massive log file.
Use browser automation for the whole thing, taking advantage of the dynamic links delivered in the HTML page. The only option here would be Selenium, and really, we're talking about a massive Selenium grid that would be extremely expensive and probably very difficult to maintain as well.
Option 2 is completely infeasible as near as I can tell, but option 1 sounds excruciating. I'm really hoping someone can suggest something more clever.
Option 1.
I mean, implementation notes aside, you're basically just asking for a testing environment. So, the answer is to make one. In whatever fashion:
Generate it
Make it once and reload it
Randomise it
Whatever. It's the approach to go with.
How you do your testing is kind of a side issue (unit testing/browser/whatever - up to you).
But you've reached a point where you need to test with real data. So make it happen.
This is a common problem. We handle it by extracting the dynamic parts of the URLs from the server responses. I presume this system uses a web browser client, which implies that those URLs are being sent in the server responses. If they are in the responses, then you CAN get them. However, since you said "if all you can do is issue posts, and you have no way of pulling those ids", perhaps this is not the case? In that case, can you clarify?
We've recently been doing a lot of testing of Drupal systems for our customers, which has exactly the problem you've described. We either solve it by extracting the IDs dynamically from the page as the user browses to the page they want to comment on, or we use option 1, or a combination of both. Note that if you have a load testing tool handy, then generating the content is not too difficult - use the tool to do it, i.e. run a "content generation" load test. Besides yielding useful data in its own right, that will give you a test database that you can then backup/restore as needed to maintain your test infrastructure. Now you can run the test in a more realistic environment - one that already has lots of content in it (assuming, of course, that this is realistic for your purposes).
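For example, the extraction side of that can be as small as this (a selenium-webdriver sketch in Ruby; the selectors and URLs are assumptions about the site under test, and a load testing tool would typically do the same thing with a regex extractor on the response body):

    # Sketch: pull the dynamic article IDs out of a listing page, then use them
    # to drive comment creation. Selectors and URLs are assumptions.
    require "selenium-webdriver"

    driver = Selenium::WebDriver.for :chrome
    driver.get("https://example.test/articles")

    # Assume each article link looks like /articles/<id>; collect the ids.
    article_ids = driver.find_elements(css: "a.article-link").map do |link|
      link.attribute("href")[%r{/articles/(\w+)}, 1]
    end.compact

    article_ids.each do |id|
      driver.get("https://example.test/articles/#{id}")
      driver.find_element(css: "textarea#comment").send_keys("load test comment")
      driver.find_element(css: "button#submit-comment").click
    end

    driver.quit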
If you are interested, I'd be happy to demo how we solve the problem using our software (Web Performance Load Tester).
I have used Visual Studio to solve this kind of problem. Visual Studio allows C# coded web tests that can programmatically parse the returned HTML and take action based on it.
I was load testing a SharePoint website and required information to be populated ahead of time. I did create a load test specifically for creating "random" pages of content ahead of time. I populated a test harness database with the URLs ahead of time, allowing some control over the pages that were loaded.
With a list of "articles" available and a list of potential comments, it is possible to code a pseudo-random number generator (inside a stored procedure, because of the asynchronous nature of the test harness) to get a repeatable load test. That meant the site would be populated in the same way each time the load test was run.
It does take some effort to create a decent way of populating the site off the bat, but the return in the relevance of the load test is quite good.

How to be successful in web user interface testing?

We are setting up a Selenium test campaign on a big web application.
The first thing we did was build a framework which initializes SQL data in the database before the test, launches the test, archives the results and then clears the data.
We've integrated that into a Maven 2 process, run every day by TeamCity against a dedicated database.
We've set up several Selenium tests now, but they're not used as much as planned.
The reason is that tests sometimes break for reasons other than regressions (data may have changed, a stored procedure may have been recompiled, and so on).
I would like to know if there are big successes in user interface testing and, if so, the reasons for them. Common errors would also interest me.
Testability helps a lot. The biggest win for testability in web apps is if all of the HTML elements you need to interact with on the page have unique and consistent attributes. If the attributes you are using to identify the HTML elements (for example via XPath or id locators in Selenium) are not consistent/reliable from build to build, or session to session, your test scripts will fail. These attributes must also be unique, so that the automation tool (in this case Selenium) can reliably find the object on the web page.
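For instance (a Ruby selenium-webdriver sketch; the attribute values are invented), a stable id or data attribute keeps working across restyles and rebuilds where a positional XPath does not:

    # Sketch: prefer stable, unique attributes over positional locators.
    require "selenium-webdriver"

    driver = Selenium::WebDriver.for :firefox
    driver.get("https://example.test/products/new")

    # Brittle: breaks whenever the page structure shifts.
    # driver.find_element(xpath: "/html/body/div[3]/form/input[2]")

    # Robust: keyed to attributes the team keeps unique and consistent.
    driver.find_element(id: "product-name").send_keys("Widget")
    driver.find_element(css: "[data-test='save-product']").click

    driver.quit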
If you want reliable unit tests, you need to have the same input. The starting state of the database is the input, so you need to have the same starting database each time. Of course, if you wish to test with different input, you need to create another unit test (as the results will obviously not be the same).
When I do stuff like this, I always use the same database as a starting point. Of course, some tests might fail and leave the database modified in the wrong way, so subsequent tests might fail as well even though they wouldn't otherwise. If your unit-test tool allows it, you should define dependencies between tests to make sure that those tests are not run at all when the 'parent' one fails.
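One way to get that same starting database on every run is to restore a known snapshot before the suite starts; a sketch assuming RSpec and a MySQL dump (file name and credentials are placeholders, and the same idea applies to whatever runner and database you use):

    # Sketch: restore a baseline snapshot so every run starts from the same input.
    RSpec.configure do |config|
      config.before(:suite) do
        restored = system(
          "mysql -u test_user -ptest_password test_db < db/baseline_dump.sql"
        )
        raise "failed to restore baseline database" unless restored
      end
    end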
I use HttpUnit, which has the added benefit of working before any styling has been added to the page.
http://httpunit.sourceforge.net/
You can attach the tests to run in the integration-test phase of Maven 2.
From the site:
Written in Java, HttpUnit emulates the relevant portions of browser behavior, including form submission, JavaScript, basic http authentication, cookies and automatic page redirection, and allows Java test code to examine returned pages either as text, an XML DOM, or containers of forms, tables, and links.
