We've just moved our tests onto CircleCI, and we're using Mocha.
The majority of our tests check whether the JSON object returned by our API is correct, so we're relying heavily on the output of chai-subset.
I just set up the mocha-junit-reporter and to my disappointment discovered that it does not save the object diffs in the report it creates.
Is there a way to get this information into such reports?
I know I can set up the mocha-jenkins-reporter to create both spec and JUnit reports, but it seems like a bit of a waste to even have the JUnit reports if they don't contain the most useful debugging information.
Related
I am trying to create a custom test report using Maven as my build tool and JUnit as my framework, along with Selenium test cases. I was using Maven's Surefire Report plugin, but I need to include more information in my report. Can anyone direct me to a good tutorial on how to create a custom Maven reporting tool?
We have a much better test automation dashboard based on just a few API calls: the ARES dashboard (built under Testastra and owned by ZenQ) is a great option to try, and it's absolutely free.
ARES is an acronym for Test Automation Results dashboard. It's a test automation framework/tool agnostic solution that simplifies the collection of test automation results and their analysis via a live dashboard, daily/weekly trends, frequent failures, etc. Website: http://www.testastra.com/#ares
The repo below has some code samples, documentation and usage notes for the ARES test automation dashboard: https://github.com/testastra/ARES
Give it a try.
For functional automation we usually create a framework that is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information about it.
You can consider JMeter itself a "framework" which already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts at the protocol level; different applications will therefore require different requests.
There is a mechanism in JMeter for re-using pieces of a test plan as modules so you won't have to duplicate your code (check out Test Fragments and the Module Controller), but it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests into continuous integration process so you will have a build step which will execute performance tests and publish reports, basically you will be able to re-use the same test setup and reporting routine and the only thing which will change from application to application will be .jmx test script(s). See JMeter Maven Plugin and/or JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API, where the exposed external interface is static but the back-end code that handles it is changing, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and expect to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more of the intelligence under the covers of your test interface comes a higher resource cost per virtual user, which in turn dictates more hosts for test execution and more skew from client-side code that can distort your view of what is coming from the server.

As an example, run a Selenium script instead of a base JMeter script: a browser is invoked, you get the benefit of all of the local JavaScript processing to handle the dynamic changes, and your script has a longer life.
I'm looking to abstract the sequence of REST calls for complicated behaviors in my company's app into a series of classes that are instantiated as needed, whose methods would effectively create the sequence of HTTP request calls. It's my hope that doing this would make the tests more compact and readable (as well as providing more reusable code). I would need to utilize the StandardJMeterEngine and export the test to JMX format after the HashTree test plan is created.
To cut down on development time, I'm hoping to find a nice example of this; I'm sure someone's done it, but I've yet to stumble onto it.
If you are looking into programmatic creation of JMeter tests, take a look at the following sources:
JMeter API
How to Write a plugin for JMeter
Five Ways To Launch a JMeter Test without Using the JMeter GUI
If you are looking for an example project, check out the jmeter-from-code solution, which demonstrates creating a JMeter test plan programmatically, storing it in a .jmx script file, running it and getting the .jtl results file.
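To give a feel for it, here is a minimal sketch along those lines using the JMeter API directly (the JMeter home path, domain, endpoint and file names are all placeholders):

    import java.io.FileOutputStream;

    import org.apache.jmeter.control.LoopController;
    import org.apache.jmeter.engine.StandardJMeterEngine;
    import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
    import org.apache.jmeter.save.SaveService;
    import org.apache.jmeter.testelement.TestPlan;
    import org.apache.jmeter.threads.ThreadGroup;
    import org.apache.jmeter.util.JMeterUtils;
    import org.apache.jorphan.collections.HashTree;

    public class JMeterFromCode {
        public static void main(String[] args) throws Exception {
            // Point JMeter at an existing installation so its properties load
            // (placeholder paths).
            JMeterUtils.setJMeterHome("/path/to/apache-jmeter");
            JMeterUtils.loadJMeterProperties("/path/to/apache-jmeter/bin/jmeter.properties");
            JMeterUtils.initLocale();

            // One HTTP request; in the layered design from the question, a
            // method on a behavior class would produce each such sampler.
            HTTPSamplerProxy sampler = new HTTPSamplerProxy();
            sampler.setProtocol("https");
            sampler.setDomain("example.com");
            sampler.setPort(443);
            sampler.setPath("/api/resource");
            sampler.setMethod("GET");

            LoopController loopController = new LoopController();
            loopController.setLoops(1);
            loopController.setFirst(true);
            loopController.initialize();

            ThreadGroup threadGroup = new ThreadGroup();
            threadGroup.setName("Example Thread Group");
            threadGroup.setNumThreads(1);
            threadGroup.setRampUp(1);
            threadGroup.setSamplerController(loopController);

            // Assemble the HashTree structure the engine expects.
            TestPlan testPlan = new TestPlan("Generated Test Plan");
            HashTree testPlanTree = new HashTree();
            HashTree threadGroupTree = testPlanTree.add(testPlan, threadGroup);
            threadGroupTree.add(sampler);

            // Export to .jmx so the plan can also be opened in the GUI...
            SaveService.saveTree(testPlanTree, new FileOutputStream("generated-plan.jmx"));

            // ...and run it headlessly.
            StandardJMeterEngine engine = new StandardJMeterEngine();
            engine.configure(testPlanTree);
            engine.run();
        }
    }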
My team is currently building a WebDriver test framework in Ruby. We are looking for a way to generate test completion reports so they can be emailed out, ideally including individual test and test verification point results.
As an example of what I mean by test verification points: a test which creates a product could have multiple verification points, such as whether the product name was created correctly and whether the product price was created correctly. If the test completion report could specify which verification point failed, it would make assessing failures a lot quicker.
The reports that can be output from the Selenium IDE are pretty much what I'm after.
Since you are using Ruby, you can consider storing your verification point outcomes, test case status, etc. in a DB such as MySQL or SQLite. This gives you the ability to perform various analyses on the health of your tests, past and present. Based on this you can even predict the future trend.
Maybe the Allure report and its RSpec adapter could suit your requirements? This report is rather new and gives you a wide range of features, like grouping tests by BDD features and stories, saving attachments, parameters and so on.
We are setting up a Selenium test campaign on a big web application.
The first thing we did was build a framework which initializes SQL data in the database before each test, launches the test, archives the results and then clears the data.
We've integrated that into a Maven 2 process, run every day by TeamCity against a dedicated database.
We've set up several Selenium tests now, but they aren't used as much as planned.
The reason is that tests sometimes break for reasons other than regressions (data may have changed, a stored procedure may have been recompiled, and so on).
I would like to know whether there have been big successes in user interface testing and, if so, the reasons for them. Common errors would also interest me.
Testability helps a lot. The biggest win for testability in web apps is when all of the HTML elements you need to interact with on the page have unique and consistent attributes. If the attributes you are using to identify the HTML elements (Selenium uses XPath) are not consistent/reliable from build to build, or session to session, your test scripts will fail. These attributes must also be unique, so that the automation tool (in this case Selenium) can reliably find the object on the web page.
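To make that concrete, here is a hedged sketch in Java (the XPath and the element ID are hypothetical): the first locator is coupled to the page's structure, while the second depends only on a unique, consistent attribute that the developers control.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class LocatorExample {

        static WebElement findProductName(WebDriver driver) {
            // Brittle: tied to the page layout, breaks as soon as the
            // surrounding markup changes between builds.
            // return driver.findElement(
            //         By.xpath("/html/body/div[3]/table/tbody/tr[2]/td/span"));

            // Robust: depends only on a unique, consistent attribute
            // (the id is hypothetical) that survives layout changes.
            return driver.findElement(By.id("product-name"));
        }
    }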
If you want reliable unit tests, you need to have the same input, and the starting state of the database is that input. So you need to have the same starting database each time. Of course, if you wish to test with different input, you need to create another unit test (as the results will obviously not be the same).
When I do stuff like this, I always use the same database as a starting point. Of course, some of the tests might fail without leaving the database in the correct state, so some subsequent tests might fail as well even though they wouldn't otherwise. If your unit-test tool allows it, you should define dependencies between tests to make sure that those tests will not be run at all when the 'parent' one fails.
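A minimal sketch of that idea with JUnit, where every test starts from the same database snapshot (TestDatabase, restoreSnapshot and ProductDao are hypothetical helpers, not a real library API):

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ProductCatalogTest {

        @Before
        public void resetDatabase() throws Exception {
            // Restore the same known starting state before every test, so
            // each test sees identical input regardless of what ran before.
            TestDatabase.restoreSnapshot("catalog-baseline"); // hypothetical helper
        }

        @Test
        public void countsSeededProducts() throws Exception {
            // The expected value is only meaningful because the snapshot is fixed.
            assertEquals(42, new ProductDao(TestDatabase.connection()).countProducts());
        }
    }

JUnit itself has no built-in notion of dependencies between tests; TestNG's dependsOnMethods attribute is one way to get the "skip the children when the parent fails" behavior described above.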
I use HttpUnit, which has the added benefit of working before any styling has been added to the page.
http://httpunit.sourceforge.net/
You can attach the tests to run in the integration-test phase for Maven 2.
From the site:
Written in Java, HttpUnit emulates the relevant portions of browser behavior, including form submission, JavaScript, basic http authentication, cookies and automatic page redirection, and allows Java test code to examine returned pages either as text, an XML DOM, or containers of forms, tables, and links.
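For illustration, a minimal sketch of driving a page with HttpUnit (the URL and form field names are placeholders):

    import com.meterware.httpunit.GetMethodWebRequest;
    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebForm;
    import com.meterware.httpunit.WebRequest;
    import com.meterware.httpunit.WebResponse;

    public class HttpUnitExample {
        public static void main(String[] args) throws Exception {
            WebConversation conversation = new WebConversation();

            // Fetch the page; no browser rendering is involved, so this
            // works before any styling has been added.
            WebRequest request = new GetMethodWebRequest("http://example.com/login"); // placeholder URL
            WebResponse response = conversation.getResponse(request);

            // Examine the page as a container of forms rather than styled markup.
            WebForm form = response.getForms()[0];
            form.setParameter("username", "tester"); // field names are assumptions
            form.setParameter("password", "secret");
            WebResponse result = form.submit();

            System.out.println(result.getTitle());
        }
    }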