VS2010 Coded UI Test vs Web Performance Tests - visual-studio-2010

I know this question looks a lot like this one, but I don't have enough rep points to comment there to seek further clarification: VS2010 Coded UI Tests vs. Web Performance test (What's the difference??)
Tom E. gives a great explanation, but I'm still a bit vague on one point. I see why Coded UI tests cannot replace Web Performance tests (the extra resources needed for a browser interface), but why can Web Performance tests not replace Coded UI tests?
If you record a webperf test and generate the code from that, couldn't you add validation and extraction rules (inspecting the DOM) to achieve the same result as a Coded UI test without the overhead of the browser?
I realize that this wouldn't be exactly the same as testing in different browsers, but is there a reason this wouldn't at least test whether you're receiving the appropriate response from the server?
Thanks!

Dave, good point. I think you would see the difference fairly quickly if you were trying to build an automated functional test suite (think 500 tests or more) with VS web performance tests and having to parse the DOM for querying and interacting with the application. You would essentially be writing your own Coded UI test playback mechanism. You could do it without the Coded UI test functionality, but it would be quite painful. The level of pain would depend on how many test cases you need to automate, how many screens there are in your app, and how complex the interactions are.
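For a concrete sense of what that hand-rolling looks like, here is a rough sketch of a custom validation rule for a web performance test (the class name and the expected-text property are made up for illustration). Note that it only ever sees the returned markup as text - there is no rendered DOM or real browser to query:
using Microsoft.VisualStudio.TestTools.WebTesting;

public class ContainsExpectedText : ValidationRule
{
    // Hypothetical property; you would set it per request in the web test editor.
    public string ExpectedText { get; set; }

    public override void Validate(object sender, ValidationEventArgs e)
    {
        // BodyString is the raw HTML the server returned, as a string.
        // Anything smarter than string/regex checks against this markup
        // is parsing code you write and maintain yourself.
        e.IsValid = e.Response.BodyString.Contains(ExpectedText);
        e.Message = e.IsValid
            ? "Expected text found."
            : "Expected text '" + ExpectedText + "' not found in the response.";
    }
}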

Related

What is the difference between using `react-testing-library` and `cypress`?

So, react-testing-library is used for unit/integration testing, and cypress is used for e2e testing. However, both appear to do the same thing:
react-testing-library
Facilitates mocking
Tests as a user would
Starts with the top-level component (not a hard and fast requirement, but if you don't you end up with a bunch of duplicate test cases in your sub-component testing)
Instant feedback, fast
cypress
Facilitates mocking
Tests as a user would
Starts with the top-level component (the page)
Delayed feedback, slow, but provides extra tooling (video proof, stepping through tests, etc.)
Aside from the feedback cycle, they appear to be almost identical. Can somebody clarify what the differences are? Why would you want to use both?
You've answered your question in the first line. If you want to test your React app end-to-end, connected to APIs and deployed somewhere, you'd use Cypress.
react-testing-library's aimed at a lower level of your app, making sure your components work as expected. With Cypress, your app might be deployed on an environment behind CDNs, using caching, and its data could come from an API. In Cypress you'd also write an end-to-end journey, a happy path through your app that might give you extra confidence once you've deployed.
Here's an article from Kent C. Dodds answering your question.
My personal take is that the best bang-for-the-buck tests are Cypress E2E tests. This is especially true if you are new to testing and/or at the beginning of a project.
As the project grows, there will come a point where it makes sense to add some extra safety nets with integration tests to make sure certain complex parts of your frontend work. RTL is a better tool for these; it runs faster and you can get more granular with it.
And finally, rarely, when you have some specific non-trivial logic, e.g. some complicated transforming of some complicated API data, then unit test it.
Also RTL plays really nicely with Mock Service Worker and Storybook. You might not feel it's worth it for either of these tools alone, but put them together and wow you have an amazing, robust testing/documenting/sandboxing system in place!

Is there a workable approach to use test-driven development (TDD) in a COBOL application?

Has anyone come across any workable approaches to implementing test-driven development (and potentially behavior-driven development) in/for COBOL applications?
An ideal solution would enable both unit and integration testing of both transactional (CICS) and batch-mode COBOL code, sitting atop the usual combination of DB2 databases and various fixed width datasets.
I've seen http://sites.google.com/site/cobolunit/, and it looks interesting. Has anyone seen this working in anger? Did it work? What were the gotchas?
Just to get your creative juices flowing, some 'requirements' for an ideal approach:
Must allow an integration test to exercise an entire COBOL program.
Must allow tests to self-certify their results (i.e. make assertions a la xUnit)
Must support both batch mode and CICS COBOL.
Should allow a unit test to exercise individual paragraphs within a COBOL program by manipulating working storage before/after invoking the code under test.
Should provide an ability to automatically execute a series of tests (suite) and report on the overall result.
Should support the use of test data fixtures that are set up before a test and torn down afterwards.
Should cleanly separate test from production code.
Should offer a typical test to production code ratio of circa 1:1 (i.e., writing tests shouldn't multiply the amount of code written by so much that the overall cost of maintenance goes up instead of down)
Should not require COBOL developers to learn another programming language, unless this conflicts directly with the above requirement.
Could support code coverage reporting.
Could encourage the adoption of different design patterns within the code itself in order to make the code easier to test.
Comments welcome on the validity/appropriateness of the above requirements.
Just a reminder that what I'm looking for here is good practical advice on the best way of achieving these kinds of things - I'm not necessarily expecting a pre-packaged solution. I'd be happy with an example of where someone has successfully used TDD in COBOL, together with some guidance and gotchas on what works and what doesn't.
Maybe check out QA Hiperstation. It could cost a lot though (just like every other mainframe product).
I only used it briefly a long time ago, so I cannot claim to be an expert. I used it to run and verify a battery of regression tests in a COBOL/CICS/DB2/MQ-SERIES type environment and found it to be quite effective and flexible.
I would say this could be one of the pieces of your puzzle, but certainly not the whole thing.
Regardless of how you build/run unit tests, you likely need a summary of how well the tests are doing and how well tested the resulting software is.
See our SD COBOL Test Coverage tool, specifically designed for IBM COBOL.
This answer may not be as easy as you (and I) had hoped.
I have heard about COBOLunit before, but I also don't think it's currently being maintained.
Our team develops an enterprise software product for managing Auto/Truck/Ag dealerships the vast majority of which is in AcuCOBOL.
We were able to break some ground in possibly using JUnit (unit testing for Java) to execute and evaluate COBOL unit tests.
This required a custom test adapter that can serve as the piping and wiring for data between the COBOL unit tests and the JUnit framework. In the application to be tested we will then need to add/design hooks that will evaluate the input as test case data, perform the test to which the data relates, and report results to the adapter.
We are at the beginning of this experiment and haven't gotten much past the "it's possible" phase into "it's valuable". The first foreseeable snag (which I think exists in all TDD) is how to build harnesses into the program.

BDD-testing using a UI driver (e.g. Selenium for a web-application)

Can BDD (Behavior Driven Design) tests be implemented using a UI driver?
For example, given a web application, instead of:
Writing tests for the back-end, and then more tests in Javascript for the front-end
Should I:
Write the tests as Selenium macros, which simulate mouse-clicks, etc in the actual browser?
The advantages I see in doing it this way are:
The tests are written in one language, rather than several
They're focussed on the UI, which gets developers thinking outside-in
They run in the real execution environment (the browser), which allows us to
Test different browsers
Test different servers
Get insight into real-world performance
Thoughts?
We've done this for a C# application using a WPF testing tool (WipFlash) and writing NUnit tests in a BDD-like fashion.
e.g.
Given.TheApplicationWindowIsOpen();
When.I.Press.OKButton();
The.Price.ShouldBeCalculated();
We had to code a lot of the DSL ourselves, needless to say. But it becomes a business/customer readable solution.
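For anyone curious how a DSL like that hangs together, here is a hypothetical sketch (not our actual code) of the kind of wrapper classes involved; the Ui helper stands in for whatever automation layer you use (WipFlash, UI Automation, etc.):
using NUnit.Framework;

public static class Given
{
    public static void TheApplicationWindowIsOpen()
    {
        Ui.Launch(); // launch or attach to the application under test
    }
}

public static class When
{
    public static class I
    {
        public static class Press
        {
            public static void OKButton() { Ui.Click("OK"); }
        }
    }
}

public static class The
{
    public static class Price
    {
        public static void ShouldBeCalculated()
        {
            Assert.That(Ui.ReadText("Price"), Is.Not.Empty);
        }
    }
}

// Stand-in for the real automation layer.
public static class Ui
{
    public static void Launch() { /* start the app */ }
    public static void Click(string buttonName) { /* drive the button via automation */ }
    public static string ReadText(string fieldName) { return "42.00"; }
}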
Try using SpecFlow with WatiN: (I'm not sure if you're using .NET here)
http://msdn.microsoft.com/en-us/magazine/gg490346.aspx
For web testing, you could try WebDriver. The Selenium team are busy integrating WebDriver at the moment. Simon Stewart from Google, who created WebDriver, blogged here about how it works differently to Selenium.
WebDriver uses different technologies for each browser. For Internet Explorer, WebDriver uses Microsoft's UI automation - the same technology that WipFlash, which Brian Agnew mentioned, is based on. This is as close as you'll get to really clicking the buttons. Simon's blog shows why this approach can be more powerful than Selenium's JavaScript solution.
WebDriver is available from the Selenium site but hasn't been fully implemented as part of Selenium yet.
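For illustration, a minimal sketch of driving Internet Explorer through WebDriver's .NET bindings might look like this (assuming the OpenQA.Selenium packages; the URL and link text are placeholders):
using OpenQA.Selenium;
using OpenQA.Selenium.IE;

class WebDriverSmokeTest
{
    static void Main()
    {
        // InternetExplorerDriver talks to IE through native automation,
        // not through injected JavaScript as Selenium RC does.
        IWebDriver driver = new InternetExplorerDriver();
        try
        {
            driver.Navigate().GoToUrl("http://someurl.com/");
            driver.FindElement(By.LinkText("Click Here")).Click();
            System.Console.WriteLine(driver.Title);
        }
        finally
        {
            driver.Quit();
        }
    }
}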
For BDD, and any use-case driven tests, it is important to be able to communicate what a test is doing. The problem with many test suites is that once they're written, nobody is quite certain exactly what each test is doing. This comes up very often if you write in a non-specialized language. Specialization doesn't necessarily mean a special language, but just enough of an abstraction in the one language that it is clear what is happening.
For example, a lot of tests have code that looks like this (pseudo-code, I won't pick on any particular framework):
object = createBrowser()
response = object.gotoURL( "http://someurl.com" );
element = response.getLink( "Click Here" );
response = element.doClick();
This is hard for somebody to quickly translate for a business driver (a product manager, perhaps, or a user). Instead you want to create specialized functions, or a language if you're adventurous, so you can have this:
GotoURL http://someurl.com/
Click link:Click Here
Selenium, and its macros or interface, are still fairly low-level in this regard. If you do use them, then at least build some wrappers around them.
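As a rough sketch of what such a wrapper can look like (assuming the WebDriver .NET bindings; the class and method names are made up), so the test body reads as intent rather than plumbing:
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

public class BrowserSteps
{
    private readonly IWebDriver driver = new FirefoxDriver();

    public void GotoUrl(string url)
    {
        driver.Navigate().GoToUrl(url);
    }

    public void ClickLink(string linkText)
    {
        driver.FindElement(By.LinkText(linkText)).Click();
    }

    public void Quit()
    {
        driver.Quit();
    }
}

// A test then stays at the business-readable level:
//   steps.GotoUrl("http://someurl.com/");
//   steps.ClickLink("Click Here");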
You can of course also use a product called TestPlan. It has Selenium in the back-end and exposes a high-level API, and a custom language for testing. It also goes beyond just the web to include Email, FTP, etc. The sample language above is a TestPlan snippet.
You can certainly do some of your acceptance tests this way, but I think most BDD advocates would not advise using this for all tests. And of course, true BDD advocates wouldn't call them tests...
The RSpec Book advocates a two-level cycle with acceptance tests (or Scenarios) written first (primarily in Cucumber), and unit tests written (in RSpec) in an inner cycle more resembling the traditional TDD.
The outer cycle of acceptance testing can also use tools like Selenium to drive the entire application through the UI (and the authors of The RSpec Book spend a chapter on this). But it is not appropriate for unit tests.
Tests exercising the entire application through the UI are harder to make repeatable, and have a tendency to be slower and more fragile than unit tests.
Actually you could do both - make a user-centric Driver interface (agnostic of GUI / tech / impl). You could then write a UIDriver and an APIDriver and choose a driver to run a specific test. Running through the UI is usually slower (out of proc, control repaints), but it somehow creates a higher level of confidence initially. Running through the API is much faster (in proc, easy setup-teardown).
The trick here is to separate the What from the How. Otherwise you will end up with ObscureTests and high test maintenance. Ensure the primary focus is on testing and not automation.
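A bare-bones sketch of the idea, with made-up names: the test talks to a user-centric interface, and you plug in the UI-backed or API-backed driver per run:
// The test only knows about this user-centric contract.
public interface IAccountDriver
{
    void RegisterUser(string name, string password);
    bool CanLogIn(string name, string password);
}

// Drives the real browser (e.g. via WebDriver) - slow, out of proc, high confidence.
public class UiAccountDriver : IAccountDriver
{
    public void RegisterUser(string name, string password) { /* click through the signup pages */ }
    public bool CanLogIn(string name, string password) { /* submit the login form, inspect the result page */ return true; }
}

// Calls the application layer directly - fast, in proc, easy setup/teardown.
public class ApiAccountDriver : IAccountDriver
{
    public void RegisterUser(string name, string password) { /* call the service layer */ }
    public bool CanLogIn(string name, string password) { /* call the auth service */ return true; }
}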

I know I may not write production code until I have written a failing unit test, so can I tell my manager I cannot write UIs? [closed]

I've been using TDD for server-side development. I'm not really sure if the benefits of having all of my production code surrounded by unit tests outweigh the disadvantage of spending 4x more time than needed on refactoring.
But when I'm developing UI code, I simply cannot apply TDD. To all fundamentalists out there, the first law of TDD states that "you may not write production code until you have written a failing unit test". But how can this be if you are developing UIs?
(One can use acceptance test frameworks like Selenium, but that doesn't count, because you don't interact directly with the source code.)
So, can I tell my manager that because of the new >90% code coverage policy I cannot write user interface code?
If you find that TDD causes you to spend 4x more time on refactoring, you need to be writing better, more isolated tests, and really let the tests drive the design, as intended. You are also not counting the time you spend in the debugger when you refactor without tests, not to mention how much time everyone else spends on bugs you introduce when you refactor.
Anyway, here is some good advice about what TDD means for UI development. How much that will translate into code coverage depends heavily on the UI framework.
Definitely don't tell your manager you can't do it, he may just replace you with someone who can.
First off, even Robert Martin has testing challenges with UIs.
When TDDing a UI, you write "behavioral contracts" that get as close to the action as possible. Ideally that means unit tests. But some UI frameworks make that inordinately difficult, requiring that you step back and use integration or "acceptance" tests to capture how you expect the UI to behave.
Does it not count if you can't use unit tests? That depends on which rules you're using to keep score. The "only unit tests count" rule is a good one for beginners to try to live with, in the same vein as "don't split infinitives" or "avoid the passive voice". Eventually, you learn where the boundaries of that rule are. In one podcast, Kent Beck talks about using combinations of unit and integration tests, as appropriate (adding, if I recall correctly, that it doesn't bother him).
And if TDD is your goal, you can most certainly write Selenium tests first, though that can be a slow way to proceed. I've worked on several projects that have used Selenium RC to great effect (and great pain, because the tests run so slowly).
Whatever your framework, you can Google around for TDD tips from people who've fought the same battles.
TDD is about testing methods in isolation. If you want to test your UI you are doing integration tests and not unit tests. So if you carefully separate the concerns in your application you will be able to successfully apply TDD to ANY kind of project.
That policy sounds a little artificial, but I would agree with the answer that UIs require functional test cases, not unit test. I disagree however with the point about which comes first. I've worked in an environment where the UI functional tests had to be written before the UI was developed and found it to work extremely well. Of course, this assumes that you do some design work up front too. As long as the test case author and the developer agree on the design it's possible for someone to write the test cases before you start coding; then your code has to make all the test cases pass. Same basic principle but it doesn't follow the law to the letter.
Unit tests are inappropriate for UI code. Functional tests are used to test the UI, but you cannot feasibly write those first. You should check with your manager to see if the >90% code coverage policy covers UI code as well. If it does, he should probably seriously rethink that move.
Separate the business logic from the UI and ensure that the UI code takes less than 10% of the total? Separation of concerns is the main goal of TDD, so that's actually a good thing.
As far as 90% coverage goes ... well, best course is to review extant literature (I'd focus on Kent Beck and Bob Martin), and I think you'll find support for not following a mindless coverage percentage (in fact, I think Uncle Bob wrote a blog post on this recently).
Having a >90% code coverage policy is dumb, because a smart dev can get 100% coverage in one test. ;)
If you are using WPF, you can test your UI code if you use the MVVM pattern. By testing your UI code, I mean you can test the ViewModel, but there is nothing that I know of that can test XAML.
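To make that concrete, here is a minimal sketch of testing a (made-up) ViewModel with NUnit - no XAML or WPF runtime involved:
using NUnit.Framework;
using System.ComponentModel;

public class PriceViewModel : INotifyPropertyChanged
{
    private decimal total;
    public event PropertyChangedEventHandler PropertyChanged;

    public decimal Total
    {
        get { return total; }
        private set
        {
            total = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Total"));
        }
    }

    public void AddItem(decimal price) { Total += price; }
}

[TestFixture]
public class PriceViewModelTests
{
    [Test]
    public void AddItem_updates_Total_and_raises_PropertyChanged()
    {
        var vm = new PriceViewModel();
        string raised = null;
        vm.PropertyChanged += (s, e) => raised = e.PropertyName;

        vm.AddItem(9.99m);

        Assert.AreEqual(9.99m, vm.Total);
        Assert.AreEqual("Total", raised);
    }
}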
Read Phlip's book

How to deal with long running Unit Tests?

I've got about 100 unit tests with coverage of about 20%. I'm trying to increase the coverage, and since this is a project in development, I keep adding new tests.
Currently, running my tests after every build is not feasible; they take about 2 moments.
Tests include:
File reads from the test folders (data-driven style to simulate some HTTP stuff)
Doing actual HTTP requests to a local web-server (this is a huge pain to mock, so I won't)
Not all of them are unit tests; there are also quite complicated multithreaded classes which need to be tested, and I do test their overall behaviour. That can be considered functional testing, but it needs to be run every time as well.
Most of the functionality requires reading HTTP, doing TCP, etc. I can't change them, because that's the whole idea of the project; if I change these tests, it will be pointless to test this stuff.
Also, I don't think I have the fastest tools to run unit tests. My current setup uses VS TS with Gallio and NUnit as the framework. I think VS TS + Gallio is a bit slower than others as well.
What would you recommend to fix this problem? I want to run unit tests after every little change, but currently this problem is interrupting my flow.
Further Clarification Edit:
The code is highly coupled, unfortunately, and changing it is a huge refactoring process. And there is a chicken-and-egg syndrome: I need unit tests to refactor such a big code base, but I can't have more unit tests if I don't refactor it :)
Highly coupled code doesn't allow me to split tests into smaller chunks. Also, I don't test private stuff; that's a personal choice, which allows me to develop much faster and still gain a large amount of benefit.
And I can confirm that all the unit tests (with proper isolation) are quite fast actually; I don't have a performance problem with them.
These don't sound like unit tests to me, but more like functional tests. That's fine, automating functional testing is good, but it's pretty common for functional tests to be slow. They're testing the whole system (or large pieces of it).
Unit tests tend to be fast because they're testing one thing in isolation from everything else. If you can't test things in isolation from everything else, you should consider that a warning sign that your code is too tightly coupled.
Can you tell which tests you have which are unit tests (testing 1 thing only) vs. functional tests (testing 2 or more things at the same time)? Which ones are fast and which ones are slow?
You could split your tests into two groups, one for short tests and one for long-running tests, and run the long-running tests less frequently while running the short tests after every change. Other than that, mocking the responses from the webserver and other requests your application makes would lead to a shorter test-run.
I would recommend a combined approach to your problem:
Frequently run a subset of the tests that are close to the code you make changes to (for example tests from the same package, module or similar). Less frequently run tests that are farther removed from the code you are currently working on.
Split your suite in at least two: fast running and slow running tests. Run the fast running tests more often.
Consider having some of the less-likely-to-fail tests only be executed by an automated continuous integration server.
Learn techniques to improve the performance of your tests. Most importantly, by replacing access to slow system resources with faster fakes. For example, use in-memory streams instead of files. Stub/mock the HTTP access, etc.
Learn how to use low risk dependency breaking techniques, like those listed in the (very highly recommended) book "Working Effectively With Legacy Code". These allow you to effectively make your code more testable without applying high risk refactorings (often by temporarily making the actual design worse, like breaking encapsulation, until you can refactor to a better design with the safety net of tests).
One of the most important things I learned from the book mentioned above: there is no magic, working with legacy code is pain, and always will be pain. All you can do is accept that fact, and do your best to slowly work your way out of the mess.
First, those are not unit tests.
There isn't much of a point running functional tests like that after every small change. After a sizable change you will want to run your functional tests.
Second, don't be afraid to mock the HTTP part of the application. If you really want to unit test the application, it's a MUST. If you're not willing to do that, you are going to waste a lot more time trying to test your actual logic, waiting for HTTP requests to come back and trying to set up the data.
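One common, low-tech way to do that is to put the HTTP access behind an interface you own; the names here are illustrative, not from the poster's code base:
public interface IHttpFetcher
{
    string Get(string url);
}

// Production implementation that uses the real network.
public class WebClientFetcher : IHttpFetcher
{
    public string Get(string url)
    {
        using (var client = new System.Net.WebClient())
        {
            return client.DownloadString(url);
        }
    }
}

// Test double: returns canned responses instantly, no server needed.
public class FakeFetcher : IHttpFetcher
{
    public string CannedResponse = "<html>ok</html>";
    public string Get(string url) { return CannedResponse; }
}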
I would keep your integration level tests, but strive to create real unit tests. This will solve your speed problems. Real unit tests do not have DB interaction, or HTTP interaction.
I always use a category for "LongTest". Those tests are executed every night and not during the day. This way you can cut your waiting time by a lot. Try it: categorize your unit tests.
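In NUnit that is just an attribute; the nightly run includes the category and the per-build run excludes it (the NUnit 2.x console runner has an /exclude option for exactly this). A sketch:
using NUnit.Framework;

[TestFixture]
public class OrderImportTests
{
    [Test]
    public void Parses_single_order_quickly()
    {
        // fast, isolated - run on every build
        Assert.AreEqual(2, 1 + 1);
    }

    [Test, Category("LongTest")]
    public void Imports_full_feed_against_local_web_server()
    {
        // slow, hits the local HTTP server - run nightly only
    }
}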
It sounds like you may need to manage expectations amongst the development team as well.
I assume that people are doing several builds per day and are expected to run tests after each build. You might be well served to switch your testing schedule to run a build with tests during lunch and then another overnight.
I agree with Brad that these sound like functional tests. If you can pull the code apart that would be great, but until then I'd switch to less frequent testing.

Resources