Sharing SpecFlow Feature Files with Multiple Applications

My goal is to be able to write core tests that I can use within a unit testing framework as well as for UI testing with Selenium.
For a simple test like:
```gherkin
Scenario: Add two numbers
    Given I have entered 50 into the calculator
    And I have entered 70 into the calculator
    When I press add
    Then the result should be 120 on the screen
```
I would create both unit tests to prove that my core API passes and Selenium tests to prove that my UI is doing the correct thing.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using Add Existing Item > Add As Link as the solution.
Update: Adding feature files to a common directory and adding them as a link appears to be working great. The feature code-behind regenerates for each project the feature file is included in, so I can run unit tests in one project and Selenium UI tests in the other.
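For anyone trying the same thing, the linked file shows up in the consuming project roughly like this (a hedged .csproj sketch; the paths and file names are made up, but SpecFlowSingleFileGenerator is the custom tool SpecFlow registers in Visual Studio):

```xml
<!-- Hypothetical fragment of the consuming project's .csproj: the feature
     file lives in a shared folder and is included as a link, and the
     SpecFlow custom tool regenerates the code-behind per project. -->
<ItemGroup>
  <None Include="..\SharedFeatures\Calculator.feature">
    <Link>Features\Calculator.feature</Link>
    <Generator>SpecFlowSingleFileGenerator</Generator>
    <LastGenOutput>Calculator.feature.cs</LastGenOutput>
  </None>
</ItemGroup>
```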

First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global: load them in from any assembly and they are available to the SpecFlow runner. For what you are trying, this actually makes things harder, as you need to put effort into keeping the UI bindings from being mixed up with the non-UI bindings.
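To make that concrete, here is a hedged sketch of a step-definition class for the scenario above; Calculator is a stand-in for the asker's core API, and the [Scope] attribute is one way to stop tagged UI scenarios from picking up the wrong bindings:

```csharp
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
// Adding [Scope(Tag = "ui")] to a UI-specific binding class would restrict
// it to scenarios tagged @ui, keeping it apart from the core-API bindings.
public class CalculatorSteps
{
    private readonly Calculator _calculator = new Calculator(); // hypothetical core API

    [Given(@"I have entered (\d+) into the calculator")]
    public void GivenIHaveEntered(int number)
    {
        _calculator.Enter(number);
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        _calculator.Add();
    }

    [Then(@"the result should be (\d+) on the screen")]
    public void ThenTheResultShouldBe(int expected)
    {
        Assert.AreEqual(expected, _calculator.Result);
    }
}
```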
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two-stage process:
1. When you save the .feature file, the SpecFlow VS plugin generates a .feature.cs file.
2. When you run your test engine (e.g. NUnit), it ignores the plain text and uses the compiled code from .feature.cs.
So if you start using linked .feature files, I have no idea whether the SpecFlow plugin will generate a .feature.cs for both instances of the file. (If you try this, please let us know.)
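For reference, the generated code-behind is roughly of this shape (a heavily trimmed illustration assuming the NUnit provider; the real auto-generated file has much more plumbing):

```csharp
// Illustrative sketch of a SpecFlow .feature.cs code-behind, not the exact
// generated output.
[NUnit.Framework.TestFixture]
public partial class AddTwoNumbersFeature
{
    private TechTalk.SpecFlow.ITestRunner testRunner;

    [NUnit.Framework.Test]
    public void AddTwoNumbers()
    {
        // each plain-text step is replayed against the registered [Binding]s
        testRunner.Given("I have entered 50 into the calculator", null, null, "Given ");
        testRunner.And("I have entered 70 into the calculator", null, null, "And ");
        testRunner.When("I press add", null, null, "When ");
        testRunner.Then("the result should be 120 on the screen", null, null, "Then ");
        testRunner.CollectScenarioErrors();
    }
}
```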
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already in the example you have given, you have "on the screen". If you are working with just the core API then there won't be a screen, so do we change the wording to fit the non-UI scenario better?
Finally, consider just how useful your tests will be. If you already have a test that exercises the core API, what will it mean to run the same test via Selenium? All you will really test is the UI layer.
In my current employment we have a great number of regression tests that perform this very kind of testing: running up a client that connects to a server and manipulating the UI to enact the desired scenarios. These are the most fragile tests we have, due to their scale. They constantly break, and we basically have to check our entire codebase to find the line that broke them; often 10-100 of them break from a one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much.
In my own personal projects I tend to remove these tests completely, and instead, with UIs, I avoid testing the View layer. With WPF MVVM, I execute Commands and test for results in ViewModels. If somebody then decides the TextBox should be a ComboBox, or that it will work better in mauve, then my testing is isolated from the change.
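As a minimal sketch of what that looks like (CalculatorViewModel and its members are hypothetical):

```csharp
// Test the ViewModel's command directly; no View, no Selenium, no rendering.
[Test]
public void AddCommand_SumsTheEnteredNumbers()
{
    var vm = new CalculatorViewModel();
    vm.Enter(50);
    vm.Enter(70);

    vm.AddCommand.Execute(null); // ICommand exposed by the MVVM layer

    Assert.AreEqual(120, vm.Result);
}
```

Swapping the TextBox for a ComboBox touches only the View's XAML; this test never notices.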
In short, there is a reason you can't find anything about this on Google :-)

In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI -- for example testing login and ensuring that the user is redirected to the home page after a successful login. If you are using Angular 2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end specific tests (such as testing React router redirects) and SpecFlow for tests at the lower architectural layers. Our front-end uses Node.js/Express and our back-end is .NET Core. The idea is that the front-end tests mostly exercise the front-end only, with mocked-out AJAX calls to the backend (see Sinon.JS), and the back-end tests use EF Core with the in-memory option (see https://docs.efproject.net/en/latest/providers/in-memory/), so the tests all run fast.
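A minimal sketch of the in-memory wiring on the .NET Core side (BlogContext is a stand-in for your own DbContext; depending on your EF Core version the database-name argument may be optional):

```csharp
// Requires the Microsoft.EntityFrameworkCore.InMemory package.
var options = new DbContextOptionsBuilder<BlogContext>()
    .UseInMemoryDatabase("acceptance-tests")
    .Options;

using (var context = new BlogContext(options))
{
    // exercise your SpecFlow step logic against the in-memory store
}
```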
Of course, you still need a few tests that actually go all the way through, but those are different -- we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that run front-to-back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.

Related

SonarQube 6.2 - multi-language setup, show coverage per language

We have a pretty old code base where, at the moment, everything is handled within one project (frontend/backend). To improve our quality we set up a multi-language project, so instead of analyzing just Java we also analyse SCSS, HTML, JS, XML, ...
So far everything is running smoothly and working as expected; I am just curious whether there is a way to show "coverage per language". We have a lot of Java tests but no JavaScript tests, and it would be pretty neat to have an overview of how well tested the different languages are!
There is also some business value related to this! Since the coverage is not separated into integration and unit tests, the JavaScript files now factor into the overall coverage -> which we can explain easily enough, but we lose some comparability :D
This is not available natively within SonarQube. If it's really important to you, you'll need to use the web services to pull the data and do the calculations externally.
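A hedged sketch of that approach, pulling per-file coverage over the web API and aggregating it yourself (the component_tree endpoint and coverage metric exist in the 6.x web services, but check your server's /web_api documentation; the server URL and project key are made up):

```csharp
// Inside an async method; requires System.Net.Http.
using (var client = new HttpClient())
{
    var json = await client.GetStringAsync(
        "https://sonar.example.com/api/measures/component_tree" +
        "?baseComponentKey=my%3Aproject&metricKeys=coverage&qualifiers=FIL&ps=500");
    // parse the JSON, group the files by language/extension,
    // and average the coverage per group externally
}
```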

GUI testing coverage

I have two questions. My first question is: do applications exist which measure the coverage of GUI testing for web applications (not code coverage, but coverage of the GUI components on a web page)?
My second question is:
Is GUI testing with Selenium, for example, necessary if we have tests for the JavaScript as well?
Thank you in advance.
You can write a custom utility that finds all DOM elements (see http://www.w3schools.com/js/js_htmldom_elements.asp), stores them somewhere, and, after your test automation run completes, checks that none of the elements are missing.
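A rough sketch of such a utility using Selenium's C# bindings (the URL and the identifier scheme are placeholders; how you diff the inventory against what your tests touched is up to you):

```csharp
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Inventory every element on the page so a later pass can check coverage.
using (var driver = new ChromeDriver())
{
    driver.Navigate().GoToUrl("https://example.com");
    var elements = driver.FindElements(By.CssSelector("*"))
                         .Select(e => e.TagName + "#" + e.GetAttribute("id"))
                         .ToList();
    // persist `elements` and compare against the set your tests exercised
}
```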
GUI tests are required to make sure that all the integration points between your several backend APIs are working, that none of the UI elements are broken, and that all your business use cases work as expected. Mostly, UI testing is done for acceptance testing: you can show the customer that all their use cases work as expected, and in the next release you can make sure that you are not breaking any UI code. UI testing gives you confidence when releasing to end users.

Logic tests for OS X using Xcode 6

I want to implement unit testing in my Xcode project, and would like to run tests without requiring the application to be started.
Reasons for this are: I have a Core Data based document app that also uses a CVDisplayLink to control continuous rendering in a background thread.
It strikes me that I do not need a running application to test Core Data data-model functionality; this should be distinct from the view layer anyway. I would also like to isolate and performance-test my background rendering processes, something that seems very difficult with the app running but easy without it: just instantiate the right classes and feed them the correct data.
I've seen other questions that have answers for Xcode versions before six, but the answers don't seem to work for the current version.
The docs now make a distinction between application and library tests. Library tests are run against library targets.
I'm not sure I want to reorganise my code into distinct libraries at the moment, and would prefer to avoid that or fake it somehow.
I saw somewhere an Open Radar relating to this on iOS, but I'm interested in OS X.
Has anyone any insight into this?
EDIT: Learning to cope with the existing setup for now, testing with the full app running: I can run some checks that way, then close all documents and shut down the display link.
I can then run tests that create my own persistent store coordinator, in-memory data store and context, as well as test my rendering classes without fear of conflict with the other display thread.
I'm now running into trouble with linking sources; I just can't seem to get it right. I fiddle with settings and it seems to work for a bit, then suddenly it stops building again with "Undefined symbols for architecture x86_64" errors, or problems linking with 3rd-party private frameworks. I look through the web, change a few things, and it starts working again. Then I add some tests, importing more of my classes, and things stop working again... Infuriating.
EDIT 2: Pretty much all sorted now, but maybe not terribly efficient. For each test case class, I either open or close documents and start or stop the display link in the + (void)setUp method. I don't do anything in + (void)tearDown, and let setUp decide how to proceed based on the current state.
Although this means it's possible to flow from one test class to another minimizing document opens and closes, there doesn't seem to be a way to order the tests so that I could group them together.
BTW, I also solved my linking troubles mentioned above (Xcode 6 Testing Target Troubles), though that's not really relevant to this question.
It sounds like you landed on the standard solution: give your app a way to tell when it's being stood up for testing rather than for normal use, then have applicationDidFinishLaunching: skip your usual launch-time behaviors and leave it to specific tests to provide any setup they need.
You might benefit from creating multiple test suites to deal with different expected conditions, like all the tests that work around a specific document being open.

Test Automation Framework

I was wondering what would be a good UI to specify test cases.
Currently we use macros with Excel to specify our test cases, generate XML from them, and export it to the script generator.
Excel is good and really flexible, and allows testers to enter their test cases very quickly.
However, the generated XML is sometimes not well formed, and the system has a huge learning curve.
I want to change the UI from excel to something else that would allow testers to enter test cases quickly and provide flexibility.
A nice TDD tool is SLIM/FitNesse. It is a wiki system which lets you enter special tables and/or commands that trigger test methods. The test methods can be written in Java and .NET (other languages might be supported). There are also various plug-ins for DB testing or Selenium web tests. Here is a first tutorial video.
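For a flavour of what a SLIM decision table looks like (the table and the fixture are illustrative; exact naming conventions depend on the SLIM runner you use):

```
!|Division|
|numerator|denominator|quotient?|
|10       |2          |5        |
```

backed, in .NET via fitSharp, by something like:

```csharp
// Hypothetical fixture: SLIM sets the input columns and calls Quotient()
// for the "?" column, comparing the return value to the table cell.
public class Division
{
    public double Numerator { get; set; }
    public double Denominator { get; set; }

    public double Quotient()
    {
        return Numerator / Denominator;
    }
}
```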
I've used TestLink for this sort of task. It's an open-source PHP project.
You might check out Fitnesse, which does a similar thing. http://fitnesse.org/

NUnit best practice

Environment: (C# WinForms application in Visual Studio Professional 2008)
I've been digging around a little for guidance on NUnit best practices. As a solo programmer working in a relatively isolated environment I'm hoping that collective wisdom here can help me.
Scott White has a few good starting points here but I'm not sure I totally agree with everything he's said -- particularly point 2. My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer.
What can you recommend as best practices for NUnit?
If by point 2 you mean the "bin folder per solution", I can see your point. Personally, I would simply add the reference to each test project. If, on the other hand, you really mean (1b) "don't put your tests in the same assembly as your code", I heartily agree with him and disagree with you. Your tests should be distinct from your production code in order to enhance code clarity and organization. Keeping your test classes separate helps the next programmer understand them more easily. If you need access to internals in your tests -- and you might, since internal methods are "public" to the assembly -- you can use the InternalsVisibleTo construct in the AssemblyInfo.cs file.
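The attribute itself is a one-liner in the production assembly (the test assembly name here is illustrative; signed assemblies need the full public key):

```csharp
// In the production assembly's AssemblyInfo.cs
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Tests")]
```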
I, too, would recommend that, in general, it is sufficient to unit test only the public interface of the code. Done properly (using TDD), the private methods of your code will simply be refactorings of previous public code and will have sufficient test coverage through the public methods. Of course, this is a guideline not a law so there will be times that you might want to test a private method. In those instances, you can create an accessor and use reflection to invoke the private method.
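A hedged sketch of the reflection route (Calculator, its private Reduce method, and the values are hypothetical):

```csharp
using System.Reflection;

var calculator = new Calculator();
var method = typeof(Calculator).GetMethod(
    "Reduce", BindingFlags.Instance | BindingFlags.NonPublic);
var result = method.Invoke(calculator, new object[] { 120 });
Assert.AreEqual(12, result);
```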
Another recommendation that I would make is to use unit testing and code coverage in tandem. Code coverage can be a useful heuristic to identify when you need more tests. Lack of coverage should be used as a guide to indicate where more testing may be needed. This isn't to say that you need 100% coverage -- some code may be simple enough not to warrant a unit test (automatic properties, for instance) and they may not be touched by your existing tests.
There were a couple of issues that I had with the article. Probably the biggest is the lack of abstraction away from the database for unit tests. There probably are some integration tests that need to go against the db -- perhaps when testing trigger or constraint functionality if you can't convince yourself of their correctness otherwise. In general, though, I'm of the opinion that you should implement your data access as interfaces, then mock out the actual implementations in your unit tests so that there is no need to actually connect to the database. I find that my tests run faster, and thus I run them more often when I do this. Building up a "fake" database interface might take a little while but can be reused as long as you stick with the same design pattern for your data access.
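A minimal sketch of that pattern (all names illustrative): production code depends on the interface, and the unit tests substitute an in-memory fake so nothing touches the database:

```csharp
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// Reusable fake for unit tests; no connection string, no I/O.
public class FakeCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        return new Customer { Id = id, Name = "Test Customer" };
    }
}
```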
Lastly, I would recommend using NUnit with TestDriven.Net -- a very useful plugin whether you're doing NUnit or MSTest. It makes it very handy to run or debug tests from a right-click context menu.
"My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer."
If your code cannot be tested using only public entry points, then you have a design problem. You should read more about TDD and SOLID principles (especially single responsibility principle and dependency inversion). Then you will understand that following TDD approach will help you write more testable, flexible and maintainable code, without the need for using such "hacks" as testing classes' private parts.
I also highly recommend reading Google's guide to testability by Miško Hevery, it has plenty of code samples which cover these topics.
I'm in a fairly similar situation, and this question describes what I do: "keep your source close and your unit tests closer". There weren't too many others enamoured with my approach, but it works perfectly for me.