I've got about 100 unit tests with roughly 20% coverage. I'm trying to increase that coverage, and since this is a project in active development I keep adding new tests.
Currently, running my tests after every build is not feasible: they take about two minutes.
The tests include:
Reading files from the test folders (data-driven style, to simulate some HTTP traffic)
Making actual HTTP requests to a local web server (this is a huge pain to mock, so I won't)
Not all of them are unit tests; there are also some quite complicated multithreaded classes whose overall behaviour I test. That could be considered functional testing, but those tests need to run every time as well.
Most of the functionality involves reading over HTTP, doing TCP, etc. I can't change that, because it's the whole point of the project; if I stripped those parts out of the tests, testing would be pointless.
I also don't think I have the fastest tooling for running unit tests. My current setup uses VS Team System with Gallio and NUnit as the framework, and I suspect VS TS + Gallio is a bit slower than the alternatives as well.
What would you recommend to fix this problem? I want to run the unit tests after every little change, but currently this problem is interrupting my flow.
Further Clarification Edit:
The code is highly coupled, unfortunately, and changing that would be a huge refactoring process. And there is a chicken-and-egg problem here: I need unit tests to refactor such a big codebase, but I can't have more unit tests until I refactor it. :)
The highly coupled code doesn't let me split the tests into smaller chunks. Also, I don't test private members; that's a personal choice, which lets me develop much faster while still gaining a large amount of the benefit.
And I can confirm that all the unit tests (the properly isolated ones) are actually quite fast; I don't have a performance problem with them.
These don't sound like unit tests to me, but more like functional tests. That's fine, automating functional testing is good, but it's pretty common for functional tests to be slow. They're testing the whole system (or large pieces of it).
Unit tests tend to be fast because they test one thing in isolation from everything else. If you can't test things in isolation, you should consider that a warning sign that your code is too tightly coupled.
Can you tell which of your tests are unit tests (testing one thing only) vs. functional tests (testing two or more things at the same time)? Which ones are fast and which are slow?
You could split your tests into two groups, one for short tests and one for long-running tests, and run the long-running tests less frequently while running the short tests after every change. Beyond that, mocking the responses from the web server and the other requests your application makes would shorten the test run.
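To make the split concrete, here is a minimal sketch of the idea using Python's standard `unittest` (purely illustrative; in the asker's NUnit setup the equivalent mechanism would be category attributes, and the function and environment variable names below are invented for the example):

```python
import os
import unittest

def parse_status_line(line: str) -> int:
    """Tiny pure function: cheap to verify on every change."""
    return int(line.split()[1])

# Gate the slow group behind an environment variable so the default
# run after each build only exercises the fast tests.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

class FastTests(unittest.TestCase):
    def test_parse_status_line(self):
        # Pure logic, no I/O: safe to run after every little change.
        self.assertEqual(parse_status_line("HTTP/1.1 200 OK"), 200)

@unittest.skipUnless(RUN_SLOW, "slow functional tests run less frequently")
class SlowFunctionalTests(unittest.TestCase):
    def test_full_http_roundtrip(self):
        ...  # would talk to the local web server
```

Running the suite normally executes only `FastTests`; setting `RUN_SLOW_TESTS=1` (say, on a nightly job) adds the functional group.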
I would recommend a combined approach to your problem:
Frequently run a subset of the tests that are close to the code you make changes to (for example tests from the same package, module or similar). Less frequently run tests that are farther removed from the code you are currently working on.
Split your suite in at least two: fast running and slow running tests. Run the fast running tests more often.
Consider having some of the less-likely-to-fail tests executed only by an automated continuous integration server.
Learn techniques to improve the performance of your tests. Most importantly by replacing access to slow system resources by faster fakes. For example, use in memory streams instead of files. Stub/mock the http access. etc.
Learn how to use low risk dependency breaking techniques, like those listed in the (very highly recommended) book "Working Effectively With Legacy Code". These allow you to effectively make your code more testable without applying high risk refactorings (often by temporarily making the actual design worse, like breaking encapsulation, until you can refactor to a better design with the safety net of tests).
One of the most important things I learned from the book mentioned above: there is no magic, working with legacy code is pain, and always will be pain. All you can do is accept that fact, and do your best to slowly work your way out of the mess.
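The "faster fakes" recommendation above (in-memory streams instead of files) can be sketched in a few lines. This is an illustrative Python example with an invented function; the point is that code written against a file-like interface never needs a real disk in its tests:

```python
import io

def count_data_lines(stream) -> int:
    """Accepts anything file-like, so tests never touch the disk."""
    return sum(1 for line in stream
               if line.strip() and not line.startswith("#"))

# Production code would pass a real open file; the test passes an
# in-memory stream with exactly the contents it needs.
fake_file = io.StringIO("# header\nGET /a\nGET /b\n\n")
assert count_data_lines(fake_file) == 2
```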
First, those are not unit tests.
There isn't much point in running functional tests like that after every small change. After a sizable change, you will want to run your functional tests.
Second, don't be afraid to mock the HTTP part of the application. If you really want to unit test the application, it's a must. If you're not willing to do that, you are going to waste a lot more time trying to test your actual logic, waiting for HTTP requests to come back, and trying to set up the data.
I would keep your integration level tests, but strive to create real unit tests. This will solve your speed problems. Real unit tests do not have DB interaction, or HTTP interaction.
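One way to keep HTTP out of a real unit test is to inject the transport as a dependency. A minimal Python sketch follows (the class, endpoint, and values are all invented for illustration; the same shape works in C# with an interface plus a mocking framework):

```python
class PriceService:
    """Depends on an abstract http_get callable instead of a real socket."""
    def __init__(self, http_get):
        self._http_get = http_get

    def price_in_cents(self, sku: str) -> int:
        body = self._http_get(f"/prices/{sku}")  # hypothetical endpoint
        return round(float(body) * 100)

# Unit test: a stub replaces the network entirely, so the test is
# fast and deterministic.
def fake_http_get(path):
    assert path == "/prices/ABC"
    return "19.99"

service = PriceService(fake_http_get)
assert service.price_in_cents("ABC") == 1999
```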
I always use a category for "LongTest". Those tests are executed every night, not during the day. This way you can cut your waiting time by a lot. Try it: categorize your unit tests.
It sounds like you may need to manage expectations amongst the development team as well.
I assume people are doing several builds per day and are expected to run tests after each build. You might be well served by switching your testing schedule: run a build with tests during lunch and then another overnight.
I agree with Brad that these sound like functional tests. If you can pull the code apart, that would be great, but until then I'd switch to less frequent testing.
So, react-testing-library is used for unit/integration testing, and cypress is used for e2e testing. However, both appear to do the same thing:
react-testing-library
Facilitates mocking
Tests as a user would
Starts with the top-level component (not a hard and fast requirement, but if you don't you end up with a bunch of duplicate test cases in your sub-component testing)
Instant feedback, fast
cypress
Facilitates mocking
Tests as a user would
Starts with the top-level component (the page)
Delayed feedback, slow, but provides extra tooling (video proof, stepping through tests, etc.)
Aside from the feedback cycle, they appear to be almost identical. Can somebody clarify what the differences are? Why would you want to use both?
You've answered your question in the first line. If you want to test your React app end-to-end, connected to APIs and deployed somewhere, you'd use Cypress.
react-testing-library's aimed at a lower level of your app, making sure your components work as expected. With Cypress, your app might be deployed on an environment behind CDNs, using caching, and its data could come from an API. In Cypress you'd also write an end-to-end journey, a happy path through your app that might give you extra confidence once you've deployed.
Here's an article from Kent C. Dodds answering your question.
My personal take is that the best bang-for-the-buck tests are Cypress E2E tests. This is especially true if you are new to testing and/or at the beginning of a project.
As the project grows, there will come a point where it makes sense to add some extra safety nets with integration tests to make sure certain complex parts of your frontend work. RTL is a better tool for these, runs faster and you can get more granular with it.
And finally, rarely, when you have some specific non-trivial logic, e.g. some complicated transformation of complicated API data, unit test it.
Also RTL plays really nicely with Mock Service Worker and Storybook. You might not feel it's worth it for either of these tools alone, but put them together and wow you have an amazing, robust testing/documenting/sandboxing system in place!
I have a question about test driven development, which is something I am trying to learn.
I have been reviewing a project delivered by a team of my company to see what kind of stuff they include in their tests.
Basically, for every project in the solution, there is a corresponding test project. This includes the projects in the data layer. I have found that the tests in that project are actually hitting the database and making assertions based on retrieved data.
In fact, the tests of the classes in the Services layer are also hitting the database.
Is this normal in TDD?
If not, and if those classes being tested did nothing other than retrieve data, then what would be the best way of testing them?
Dare I say, should they be tested at all? If TDD helps drive out the design, arguably they should.
What do the TDD kings out there say?
Is hitting the database from most tests normal in TDD? Yes, unfortunately, it's more normal than it ought to be. That doesn't mean it's right, though.
There are various styles of TDD:
Outside-In TDD, popularized by GOOS.
Bottom-Up TDD, which is the 'older' style, where you develop 'leaf' components first.
When you use the Outside-In approach, you'd typically start with a few coarse-grained tests to flesh out the behaviour of the system. These might very well hit a database. That's OK.
However, you can't properly test for basic correctness from the boundary of a complex application alone, because the combinatorial explosion of required test cases is prohibitive. You would literally have to write tens of thousands, or hundreds of thousands of test cases.
Therefore, even with the Outside-In approach, you should write most of the tests at the level of a unit. These tests should not hit the database.
If you're using the Bottom-Up style, you shouldn't hit the database in most cases.
In short, TDD means Test-Driven Development, not necessarily Unit Test-Driven Development, so it may be fine with a few tests hitting a database. However, such tests tend to be slow and fragile, so there should be only a few of them. The concept of the Test Pyramid nicely explains this.
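The Test Pyramid point above can be sketched with a hypothetical example (Python here purely for illustration; the domain rule is invented). The combinatorial cases live in many cheap unit tests, while only a handful of coarse-grained tests cross the boundary to prove the pieces are wired together:

```python
# Many cheap unit tests cover the combinatorial cases of the logic...
def discount(age: int, is_member: bool) -> float:
    if age < 18:
        return 0.5
    return 0.2 if is_member else 0.0

assert discount(10, False) == 0.5
assert discount(30, True) == 0.2
assert discount(30, False) == 0.0

# ...while only a few coarse-grained tests exercise the full path
# (request -> service -> database) once per behaviour, not once per
# combination. Testing every combination through the boundary is what
# causes the combinatorial explosion.
```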
If you want to learn more, you may want to watch my Pluralsight courses
Outside-In Test-Driven Development
Advanced Unit Testing
Chapter about TDD from Martin's "Clean Code" caught my imagination.
However.
These days I am mostly expanding or fixing large existing apps.
TDD, on the other hand, seems to work only when writing from scratch.
Talking about these large existing apps:
1. they were not written with TDD (of course).
2. I cannot rewrite them.
3. writing comprehensive TDD-style tests for them is out of the question in the available timeframe.
I have not seen any mention of "bootstrapping" TDD into a large monolithic existing app.
The problem is that most classes of these apps, in principle, work only inside the app.
They are not separable. They are not generic. Just to fire them up, you need half of the whole app, at least. Everything is connected to everything.
So, where is the bootstrap? Or is there an alternative technique with the benefits of TDD that would work for expanding existing apps that were not developed with TDD?
The bootstrap is to isolate the area you're working on and add tests for behavior you want to preserve and behavior you want to add. The hard part of course is making it possible, as untested code tends to be tangled together in ways that make it difficult to isolate an area of code to make it testable.
Buy Working Effectively with Legacy Code, which gives plenty of guidance on how to do exactly what you're aiming for.
You might also want to look at the answers to this related question, Adding unit tests to legacy code.
Start small. Grab a section of code that can reasonably be extracted and made into a testable class, and do it. If the application is riddled with so many hard dependencies and terrifying spaghetti logic that you can't possibly refactor without fear of breaking something, start by making a bunch of integration tests, just so you can confirm proper behavior before/after you start messing around with it.
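The "confirm proper behavior before you start messing around" step is usually done with characterization tests: tests that pin down what the code does *today*, whether or not that behaviour was ever specified. A minimal Python illustration (the function and format string are invented stand-ins for a tangled legacy routine):

```python
def legacy_format_invoice(amount, customer):
    # Imagine this is tangled legacy code we dare not change yet.
    return "INV:%s:%07.2f" % (customer.upper(), amount)

# Characterization test: we don't ask whether this output is *right*,
# only that refactoring doesn't silently change it.
assert legacy_format_invoice(12.5, "acme") == "INV:ACME:0012.50"
```

Once a net of such tests is in place, you can refactor underneath them with some confidence.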
Your existing application sounds as if it suffers from tight coupling and a large amount of technical debt. In cases such as these you can spend a LOT of time trying to write comprehensive unit tests where time may be better spent doing major refactoring, specifically promoting loose coupling.
In other cases investing time and effort into unit testing with Mocking frameworks can be beneficial as it helps decouple the application for purposes of testing, making it possible to test single components. Dependency Injection techniques can be used in conjunction with mocking to help make this happen as well.
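The dependency-injection-plus-mocking combination mentioned above can be sketched briefly. This is an illustrative Python example with invented names; the idea is that because the gateway is injected through the constructor, a test can substitute a fake and exercise the component alone:

```python
class ReportService:
    # The database gateway is injected rather than constructed inside,
    # so a test can pass a fake and never touch a real database.
    def __init__(self, db):
        self._db = db

    def total_sales(self) -> int:
        rows = self._db.query("SELECT amount FROM sales")
        return sum(row["amount"] for row in rows)

class FakeDb:
    """Hand-rolled fake; a mocking framework would generate this."""
    def query(self, sql):
        return [{"amount": 10}, {"amount": 32}]

assert ReportService(FakeDb()).total_sales() == 42
```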
Has anyone come across any workable approaches to implementing test-driven development (and potentially behavior-driven development) in/for COBOL applications?
An ideal solution would enable both unit and integration testing of both transactional (CICS) and batch-mode COBOL code, sitting atop the usual combination of DB2 databases and various fixed width datasets.
I've seen http://sites.google.com/site/cobolunit/, and it looks interesting. Has anyone seen this working in anger? Did it work? What were the gotchas?
Just to get your creative juices flowing, some 'requirements' for an ideal approach:
Must allow an integration test to exercise an entire COBOL program.
Must allow tests to self-certify their results (i.e. make assertions a la xUnit)
Must support both batch mode and CICS COBOL.
Should allow a unit test to exercise individual paragraphs within a COBOL program by manipulating working storage before/after invoking the code under test.
Should provide an ability to automatically execute a series of tests (suite) and report on the overall result.
Should support the use of test data fixtures that are set up before a test and torn down afterwards.
Should cleanly separate test from production code.
Should offer a typical test to production code ratio of circa 1:1 (i.e., writing tests shouldn't multiply the amount of code written by so much that the overall cost of maintenance goes up instead of down)
Should not require COBOL developers to learn another programming language, unless this conflicts directly with the above requirement.
Could support code coverage reporting.
Could encourage the adoption of different design patterns within the code itself in order to make the code easier to test.
Comments welcome on the validity/appropriateness of the above requirements.
Just a reminder that what I'm looking for here is good practical advice on the best way of achieving these kinds of things - I'm not necessarily expecting a pre-packaged solution. I'd be happy with an example of where someone has successfully used TDD in COBOL, together with some guidance and gotchas on what works and what doesn't.
Maybe check out QA Hiperstation. It could cost a lot though (just like every other mainframe product).
I only used it briefly a long time ago, so I cannot claim to be an expert. I used it to run and verify a battery of regression tests in a COBOL/CICS/DB2/MQ-Series type environment and found it to be quite effective and flexible.
I would say this could be one of the pieces of your puzzle, but certainly not the whole thing.
Regardless of how you build/run unit tests, you likely need a summary of how well the tests are doing and how well tested the resulting software is.
See our SD COBOL Test Coverage tool, specifically designed for IBM COBOL.
This answer may not be as easy as you (and I) had hoped.
I have heard about COBOLunit before, but I also don't think it's currently being maintained.
Our team develops an enterprise software product for managing Auto/Truck/Ag dealerships the vast majority of which is in AcuCOBOL.
We were able to break some ground in possibly using JUnit (unit testing for Java) to execute and evaluate COBOL unit tests.
This required a custom test adapter that can serve as the piping and wiring for data between the COBOL unit tests and the JUnit framework. In the application to be tested we will then need to add/design hooks that will evaluate the input as test case data, perform the test to which the data relates, and report results to the adapter.
We are at the beginning of this experiment and haven't gotten much past the "it's possible" phase into "it's valuable". The first foreseeable snag (which I think exists in all TDD) is how to build harnesses into the program.
I know this question looks a lot like this one, but I don't have enough rep points to comment there seeking further clarification: VS2010 Coded UI Tests vs. Web Performance Tests (what's the difference?)
Tom E. gives a great explanation, but I'm still a bit vague on one point. I see why Coded UI tests cannot be replaced by Web Performance tests (the extra resources needed for a browser interface) but why can Web Performance tests not replace Coded UI tests?
If you record a webperf test and generate the code from that, couldn't you add validation and extraction rules (inspecting the DOM) to achieve the same result as a Coded UI test without the overhead of the browser?
I realize that this wouldn't be exactly the same as testing in different browsers, but is there a reason this wouldn't at least test whether you're receiving the appropriate response from the server?
Thanks!
Dave, good point. I think you would see the difference fairly quickly if you were trying to build an automated functional test suite (think 500 tests or more) with VS web performance tests and having to parse the DOM for querying and interacting with the application. You would essentially be writing your own Coded UI test playback mechanism. You could do it without the Coded UI test functionality, but it would be quite painful. The level of pain would be dependent on how many test cases you need to automate and how many screens there are in your app and how complex the interactions are.