Test Automation Framework - user-interface

I was wondering what would be a good UI to specify test cases.
Currently we use Excel macros to specify our test cases, generate XML from them, and export that to the script generator.
Excel is good and really flexible, and it allows testers to enter their test cases very quickly.
However, the generated XML is sometimes not well formed, and the system has a steep learning curve.
I want to change the UI from Excel to something else that would still allow testers to enter test cases quickly and flexibly.

A nice TDD tool is FitNesse with SLIM. It is a wiki system that lets you enter special tables and/or commands which trigger test methods. These test methods can be written in Java or .NET (other languages may be supported as well). There are also various plug-ins for DB testing and for Selenium web tests, plus an introductory tutorial video.
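As a rough illustration of how such a wiki table maps onto test methods, here is a sketch of a Slim-style decision table and a .NET fixture it could drive. The class, column, and member names are made up for the example; the exact mapping conventions depend on the Slim runner you use (e.g. fitSharp).

    // The wiki page would contain a decision table roughly like this:
    //
    //   |calculator division                |
    //   |numerator|denominator|quotient?    |
    //   |10       |2          |5            |
    //
    // Each row sets the input columns on the fixture and compares the
    // "?" column against the value the fixture returns.
    public class CalculatorDivision
    {
        public decimal Numerator { get; set; }    // filled from the "numerator" column
        public decimal Denominator { get; set; }  // filled from the "denominator" column

        public decimal Quotient()                 // checked against the "quotient?" column
        {
            return Numerator / Denominator;
        }
    }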

I've used TestLink for this sort of task. It's an open-source PHP project.

You might also check out FitNesse, which does a similar thing: http://fitnesse.org/

Can I do tooled refactoring for UFT script code, especially when changing function signatures?

As an enthusiastic refactorer, there's an IntelliJ feature that I love: "Refactor --> Change signature".
Basically, you have a function and you can decide to remove a parameter or add a new one, setting a default value. This is so convenient, so beautiful, and I dearly love it.
So when I got involved in an old-school UFT project with maintenance tasks, I felt jaded.
Is there a way to achieve this without changing each and every instance of the function? Please tell me yes. Please!
Well, no. I don't know of any tool capable of this.
There seem to be people who created a C# adapter for the UFT test object API, enabling them to write their tests in C# and to use Visual Studio for development of test scripts. In VS, you have the refactoring support you are looking for. But you don't create UFT scripts anymore; you'd create C# apps. (Note I am not talking about the API testing aspect of UFT, which uses C# anyway -- I am talking about the VBScript test scripts for GUI tests and BPT components.)
UFT itself is not capable of doing real static code analysis. (Let this statement sink in for a minute, and you'll agree: it's true.)
Add to this the fact that UFT's IDE is, let's say, sub-optimal, and you have what led to the development of Test Design Studio (TDS), a Visual Studio "feel-alike" editor for UFT (VBScript) scripts. You can check it out here: http://www.patterson-consulting.net/products/test_design_studio/Default.aspx
Among other things, TDS does static code analysis for UFT scripts in a pretty complete way (as far as an interpreted variant-typed language like VBScript allows that at all), and the author of the tool seems to be thinking about adding refactoring features like the one you asked for, but -- this has not happened yet. It will probably come only if demand is high.
Until then, TDS could help you:
You could simply change the signature.
If TDS knows all the calls (which it usually does), it will list all the locations you need to edit -- and this happens at design time, not at runtime.
TDS also allows you to specify the type of identifiers such as formal parameters and variables. This means you might even get warnings when you change nothing about the pure VBScript signature (which carries no type information), but do change the TDS type directive of the parameter whose type you altered.
This is no advertisement. I am not part of the company that developed TDS.
This is just an honest answer to the (slightly off-topic) question -- an answer I wish I had gotten years ago when I was asking questions like yours -- and the tool proved to be a real lifesaver.
In summary, TDS quadrupled (or more) my productivity when creating and maintaining test scripts, especially where a large base framework is used. So I'd recommend checking out the option of using TDS to better handle changes like the ones you outlined.

Sharing set up and tear down methods across multiple selenium projects

So, I have multiple Selenium test projects in separate VS solutions, and they all have the same Setup() methods, which are executed before scenarios are run, as well as the same TearDown() methods for afterwards. Currently, if a change is required to these methods, each copy has to be updated separately, so I was looking into centralising them so they can be used by all of the test projects/solutions.
I'm relatively new, so apologies in advance, but does anyone have experience of this, with suggestions on approaches I could take? Is it even possible? My tests do not currently run in parallel, so is that something I'd need to look into?
What if you change your code to use [OneTimeSetUp] and [OneTimeTearDown]?
In my opinion, you should create one class that contains both the setup and the tear down, and then have each test class inherit from it, like this: public class HREmployeeList : Setup.
Hopefully this helps; either way, check the link below:
http://toolsqa.com/selenium-webdriver/c-sharp/how-to-write-selenium-test-using-nunit-framework/
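To make the inheritance idea concrete, here is a minimal sketch using NUnit and Selenium WebDriver. The class names Setup and HREmployeeList come from the answer above; the test body and URL are placeholders, not from the original post.

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    // Shared base class: every test fixture that inherits from it
    // gets the same browser setup and tear down.
    public abstract class Setup
    {
        protected IWebDriver Driver;

        [OneTimeSetUp]
        public void StartBrowser()
        {
            Driver = new ChromeDriver();
        }

        [OneTimeTearDown]
        public void StopBrowser()
        {
            Driver?.Quit();
        }
    }

    // A test class reusing the shared setup/tear down by inheritance.
    public class HREmployeeList : Setup
    {
        [Test]
        public void ListsEmployees()
        {
            Driver.Navigate().GoToUrl("https://example.test/employees"); // placeholder URL
            Assert.That(Driver.Title, Is.Not.Empty);
        }
    }

If this base class lives in its own project or package, each solution only references it rather than copying the methods around.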
One thing you could do is create a new project that contains the functions for setup and teardown and then reference that compiled DLL in all the other projects. If you need to make a change to setup/teardown, you make the change and compile a new DLL, and the code change is passed on to all the other projects.
To keep a long story short -- you need to create a meta framework.
Conceptually, meta frameworks provide a method of solving the problem of automating multiple pieces as part of a larger automation strategy. Testers define independent utility classes which can be used generically with any automation tool and can be reused between different automation projects as well. The framework provides an abstraction layer that allows the separate automation pieces to be executed and have their results reported back in a standardized way.
I have a post on the topic, so feel free to take what you need from there.
Since you've tagged Visual Studio, I'll first share the approach my team used for sharing common functionality across test projects. What you need is a private NuGet server. Each team compiles and supports a NuGet package based on the service it provides, for example Selenium code, API calls, etc.
The next solution, probably closer to your case, would be to utilize git submodules and share the test harness engine between your projects.
Both approaches will benefit from fixture setup patterns like Shared Fixture Construction.

GUI testing coverage

I have two questions. My first question is: do applications exist which measure the coverage of GUI testing for web applications (not code coverage, but coverage of the GUI components on the web page)?
My second question is:
Is GUI testing with, for example, Selenium necessary if we have tests for the JavaScript as well?
Thank you in advance.
You can write a custom utility that finds all DOM elements (see http://www.w3schools.com/js/js_htmldom_elements.asp), store that inventory somewhere, and, after completing your test automation framework, run the utility to make sure that none of the elements have been missed.
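A rough sketch of that idea using Selenium's C# bindings rather than raw DOM scripting. The URL and the "has an id attribute" heuristic are assumptions for the example; you would still have to decide what counts as a covered element for your framework.

    using System;
    using System.Linq;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class DomInventory
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("https://example.test/page-under-test"); // placeholder URL

                // Crude inventory: every element that carries an id attribute.
                var ids = driver.FindElements(By.CssSelector("[id]"))
                                .Select(e => e.GetAttribute("id"))
                                .Distinct()
                                .ToList();

                // Persist this list somewhere, then diff it against the ids your
                // automated tests actually interacted with to spot uncovered elements.
                ids.ForEach(Console.WriteLine);
            }
        }
    }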
GUI testing is required to make sure that all the integration points between your various backend APIs are working. It also gives assurance that none of the UI elements are broken and that all your business use cases work as expected. Mostly, UI testing is done for acceptance testing, so you can show the customer that all their use cases work as expected. In the next release, you can then make sure that you are not breaking any UI code. UI testing gives you confidence when releasing to end users.

Is there a YAML test suite?

A comprehensive test suite would be a valuable tool to have, especially when evaluating all of the variant parsers out there. Does such a beast exist?
In a perfect world, I imagine it would have different sections for different versions of the YAML spec...
There is currently one in the making, see here.
We also generate a result matrix for the YAML implementations that we know of and that are written in a language we know, so we can implement adapter code to validate the test suite against them.
Full disclosure: I am the author of NimYAML and some of the test cases and adapters.

Sharing Specflow Feature Files with Multiple Applications

My goal is to be able to write core tests that I can use within a unit testing framework as well as for UI testing with Selenium.
For a simple test like:
Scenario: Add two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120
I would create both unit tests to prove that my core API would pass as well as a Selenium test that would prove my UI is doing the correct thing as well.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using "Add Existing Item" > "Add As Link" as the solution.
Update: Adding feature files to a common directory and adding them as links appears to be working great. The feature bindings regenerate for each project the feature file was included in, so I can run unit tests in one and Selenium UI tests in the other.
First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global. Load them in from any assembly and they are available to the SpecFlow runner. In respect of what you are trying to do, this actually makes things harder, as you need to put effort into keeping the UI bindings from being mixed up with the non-UI bindings.
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two stage process.
When you save the .feature file the SpecFlow VS plugin generates a .feature.cs file.
When you run your test engine (e.g. NUnit) it ignores the plain text and uses compiled code from .feature.cs
So if you start using linked .features I have no idea if the SpecFlow plugin will generate .feature.cs for both instances of the file. (If you try this please let us know)
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already, in the example you have given, the wording implies a screen. If you are working with just the core API then there won't be a screen, so do we change this to fit better in a non-UI scenario?
Finally, you have another thing to consider: just how useful will your tests be? If you already have a test that exercises the core API, then what does it mean to run the same test via Selenium? All you will really be testing is the UI layer.
In my current employment we have a great number of regression tests that perform this very kind of testing, running up a client that connects to a server and manipulating the UI to get the desired scenarios enacted. These are the most fragile tests we have due to their scale. They constantly break, and we basically have to check our entire codebase to find the line that broke them. Often something like 10-100 of them break for just a one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much.
In my own personal projects I tend to remove these tests completely; instead, with UIs, I avoid testing the View layer. With WPF MVVM, I execute Commands and test for results in ViewModels. If somebody then decides the TextBox should be a ComboBox, or that it will work better in mauve, then my testing is isolated.
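As a small sketch of what that ViewModel-level testing can look like with NUnit. AddItemViewModel and its NewItemName, AddCommand and Items members are hypothetical, not from the original post; the point is that the command is exercised directly, with no View involved.

    using NUnit.Framework;

    // Exercise the ViewModel command the View's button would invoke,
    // then assert on the ViewModel state rather than on any UI control.
    public class AddItemViewModelTests
    {
        [Test]
        public void AddCommand_AddsTheNewItem()
        {
            var vm = new AddItemViewModel();      // hypothetical ViewModel
            vm.NewItemName = "Widget";

            vm.AddCommand.Execute(null);          // same ICommand the button is bound to

            Assert.That(vm.Items, Does.Contain("Widget"));
        }
    }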
In short, there is a reason you can't find anything about this on Google :-)
In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
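For instance, here is a minimal sketch of bindings for the calculator scenario above, implemented against a core API rather than the browser. The Calculator class and its Enter/Add members are assumed application code, not from the original post; a Selenium-based project could reuse the same step text but drive the UI inside the methods instead.

    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // Bindings for the calculator scenario, implemented against a core API.
    [Binding]
    public class CalculatorSteps
    {
        private readonly Calculator _calculator = new Calculator(); // assumed application class
        private int _result;

        [Given(@"I have entered (\d+) into the calculator")]
        public void GivenIHaveEnteredANumber(int number)
        {
            _calculator.Enter(number);
        }

        [When(@"I press add")]
        public void WhenIPressAdd()
        {
            _result = _calculator.Add();
        }

        [Then(@"the result should be (\d+)")]
        public void ThenTheResultShouldBe(int expected)
        {
            Assert.AreEqual(expected, _result);
        }
    }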
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI -- for example, testing login and ensuring that the user is redirected to the home page after a successful login. If you are using Angular 2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end specific tests (such as testing React router redirects) and SpecFlow for tests at lower architectural layers. Our front end uses Node.js/Express and our backend is .NET Core. The idea is that the front-end tests mostly exercise the front end only, with mocked-out AJAX calls to the backend (see sinonjs), and the back-end tests use EF Core with the in-memory option (see docs.efproject.net/en/latest/providers/in-memory/), so the tests all run fast.
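For reference, a minimal sketch of the EF Core in-memory setup mentioned above (it needs the Microsoft.EntityFrameworkCore.InMemory package). AppDbContext is an assumed DbContext from your own backend, and the database name is arbitrary.

    using Microsoft.EntityFrameworkCore;

    // Point the context at the in-memory provider instead of a real database,
    // so the back-end acceptance tests run fast.
    var options = new DbContextOptionsBuilder<AppDbContext>()
        .UseInMemoryDatabase(databaseName: "acceptance-tests")
        .Options;

    using (var context = new AppDbContext(options))
    {
        // seed test data here, then run the SpecFlow bindings against this context
    }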
Of course, you still need a few tests that actually go all the way through, but those are different -- we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way, you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that test all the way front-to-back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.
