Apply different test scenarios without duplicating code - nightwatch.js

I have a basic test suite that is successfully running all of my tests. I've tied this into a git pre-push hook, and have noticed that some tests just don't make sense in that use case (e.g. testing whether a customer email is sent and received may take 15+ minutes).
So my question is how to organize things so that only the desired tests run, or so that certain tests are omitted when deploying. I could use tags and groups, but that doesn't seem to be a great fit here, and could lead to code duplication (putting the same test in two or more groups).
Any tips / suggestions? (I'm still looking at tags to see if I can make them work for our use case...)

I think tags are the way you want to go. Think of tags as suites. You can add one test to multiple test suites. For example, say I have several login tests. If I want them to be in a smoke test suite AND a login suite I can just add all the tags that apply to this test.
'#tags': ['smoke', 'login']
This way you don't need to duplicate the code. You can add as many tags as you need, as long as they apply to this test. In the example above the tests belong to 2 different suites, and I can either run the full smoke test suite or just the login suite using the same tests.
nightwatch --tag smoke
nightwatch --tag login
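To make that concrete, here is a minimal sketch of a Nightwatch test module carrying tags. The URL and selectors are made up for illustration, and note that newer Nightwatch versions document the tags property as '@tags' rather than '#tags':

module.exports = {
  '#tags': ['smoke', 'login'],

  'user can log in with valid credentials': function (browser) {
    browser
      .url('https://example.com/login')                    // hypothetical URL
      .setValue('input[name=email]', 'user@example.com')   // hypothetical selectors
      .setValue('input[name=password]', 'secret')
      .click('button[type=submit]')
      .assert.urlContains('/dashboard')
      .end();
  }
};

There is also a --skiptags option, so a pre-push hook could run everything except the slow tests (nightwatch --skiptags slow) while the deploy pipeline still runs the full suite.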

Related

Order of Operations for System Testing?

I was taking an exam yesterday, and I noticed they asked in which order the following occur (and I'll put the order I deemed it to be here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users; otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load-testing or database tuning oughta be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of "validation testing", "system testing" is the correct term. Database testing is part of integration and system testing, and load testing is typically performed during the system and user acceptance testing phases.

AngularJS Testing: Protractor, Karma, Jasmine in a Yeoman App

I use this yeoman generator:
https://github.com/Swiip/generator-gulp-angular
It installs three testing tools: Jasmine, Karma, and Protractor.
According to this article (Should I be using Protractor or Karma for my end-to-end testing?), I should use Karma for small tests of e.g. a single controller, and Protractor if I want to test the whole app and simulate a user browsing through it. According to this blog (http://andyshora.com/unit-testing-best-practices-angularjs.html) I would use Jasmine for unit testing and Karma for end-to-end integration tests.
I guess Jasmine is the language the tests are written in and the other two execute the code; is that correct? Also, if I have never written a test, which is more important to learn first / to focus on?
Karma is a test runner, so it runs your tests.
Jasmine is the framework that lets you write tests.
In my opinion, in AngularJS you:
must unit-test services, because your business code is there (a sketch follows this list)
should unit-test controllers, because user actions are there
may unit-test custom directives (if you plan to share a directive with others, it's a must)
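As a rough sketch of what such a unit test looks like when Karma runs it, here is a Jasmine spec for a made-up CartService living in a hypothetical 'app.cart' module (module() and inject() come from angular-mocks):

describe('CartService', function () {
  var CartService;

  beforeEach(module('app.cart'));                // load the module under test

  beforeEach(inject(function (_CartService_) {   // underscores let us reuse the plain name
    CartService = _CartService_;
  }));

  it('computes the total of the items added', function () {
    CartService.add({ price: 10 });
    CartService.add({ price: 5 });
    expect(CartService.total()).toBe(15);
  });
});

Karma loads angular, angular-mocks, your app code and this spec into a real browser and reports the result; the test itself never touches the DOM or a server.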
Protractor is made for E2E testing (tests navigation like a real user).
It combines WebDriverJS with Jasmine and lets you write end-to-end tests (you drive a real browser and take real actions) with Jasmine syntax.
That kind of test is also really important in a web app.
You should not test everything, especially at the start of the project; those kinds of tests usually come with a high maintenance cost (e.g., when you change a screen you may have to change the test).
What I do is test the critical path and features.
I made a reading app, so in my case, it was login, sign up, payment, access book, and access reader.
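To give an idea of what that looks like, here is a rough Protractor spec for a login path like the one above; the URL, model names and route are invented for the example:

describe('login', function () {
  it('takes the user to their library after signing in', function () {
    browser.get('https://example.com/login');                         // hypothetical URL
    element(by.model('credentials.email')).sendKeys('reader@example.com');
    element(by.model('credentials.password')).sendKeys('secret');
    element(by.buttonText('Sign in')).click();
    expect(browser.getCurrentUrl()).toContain('/library');            // hypothetical route
  });
});

Because Protractor drives a real browser, specs like this are slower and more brittle than unit tests, which is why limiting them to the critical paths pays off.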

Continuous Integration and Acceptance Test Driven Development

I have a question related to Acceptance Test Driven Development (ATDD). According to the process, I start every feature with an acceptance test (end-to-end test). I commit these tests and they are failing as expected. The problem is that I should somehow distinguish between the acceptance tests that are failing because the feature is not complete and those that are failing because of some regression. What is the best practice for organizing CI process with ATDD?
The tests that are not implemented yet should not be running in CI. The point of CI tests is to catch regressions. Catching "not done yet" problems creates a situation where red builds are "normal" and ignored. This is the worst outcome possible.
There are a lot of ways to do this, and the best will depend on your context. The simplest is to write the acceptance test first, but not check it in until it passes (i.e., until you have implemented the feature).
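If you would rather keep the unfinished acceptance tests in version control, another option (assuming a Jasmine-style framework) is to mark them as pending so the CI build stays green until the feature is actually implemented:

// A pending spec is reported as pending, not as a failure.
xit('sends a confirmation email after checkout', function () {
  // acceptance test for the not-yet-implemented feature goes here
});

// Or mark it from inside the spec body (Jasmine 2+):
it('sends a confirmation email after checkout', function () {
  pending('feature not implemented yet');
});

Once you start implementing the feature, drop the x / pending() call so the test turns into a genuine regression check.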

Tagging unit-tests

I'm working on a PHP project with solid unit-test coverage.
I've noticed that lately I've been doing some very tricky manipulations with the Command-Line Test Runner's --filter option.
Here is this command's explanation from official documentation:
--filter
Only runs tests whose name matches the given pattern. The pattern can be either the name of a single test or a regular expression that matches multiple test names.
I often use it because sometimes it is very useful to run just a single test suite or test case from the whole test base.
I'm wondering if this is good practice or not?
I have heard that it is good practice to still run the whole test suite on your Continuous Integration machine, even if you know for sure that you have modified only one component and are 100% confident that it won't break other components' unit tests.
What do you think about it?
Some time ago I thought that we shouldn't care so much about the time required to run the whole suite of unit tests, but when you have very complicated business logic and unit tests, this can take a significant amount of time.
I understand that "real" unit tests shouldn't interact with the DB and should use mock/stub objects instead, and I agree with that. But sometimes it is much easier (cheaper) to use DB fixtures for the tests.
Please give me some advice on how this problem can be solved.
Good unit tests should:
Have clear method names and variable names so they act as documentation
Run fast. This is also possible for tests with complicated business logic; a test should run in an average time of around 0.1 seconds.
Test exactly one thing in one test method
Not integrate with external resources like the filesystem, email, databases, web services, and everything else. You can create separate database integration tests to test your database interaction. These tests will be slower than your unit tests most of the time. I put my integration tests in a separate project and run them only when I am working on the integration code. I also run them on all builds on the CI server.
Be completely isolated from each other. When you have tests depending on each other, you cannot tell what the problem is just from reading which tests failed. You might have to debug to find it. Isolated tests will save you a lot of time.
Personally, I don't use category names in my tests. I use 2 test projects per application: one for the unit tests and one for the integration tests and slower tests.
Reaction to: "But sometimes, it is much easier (cheaper) to use DB fixtures for the tests."
When your code is written well, it will be easier to mock. I don't know about mocking frameworks in PHP, but I use them in other languages and they save me a lot of time. Writing the test first and the code later might help you design your code to be easier to test.
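To make that concrete (in JavaScript/Jasmine rather than PHP, with made-up class names), here is the kind of test where a spy replaces the repository that would otherwise need a database fixture:

// OrderService and OrderRepository are hypothetical; the repository is replaced
// by a spy, so no database fixture is needed.
function OrderService(repository) {
  this.pay = function (order) {
    order.paid = true;
    repository.save(order);
  };
}

describe('OrderService', function () {
  it('marks the order as paid and saves it', function () {
    var repository = jasmine.createSpyObj('OrderRepository', ['save']);
    var service = new OrderService(repository);

    service.pay({ id: 42, paid: false });

    expect(repository.save).toHaveBeenCalledWith(
      jasmine.objectContaining({ id: 42, paid: true }));
  });
});

PHPUnit's built-in mock objects (and libraries such as Mockery or Prophecy) give you the same idea on the PHP side.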
Personally I learned to test better by
reading blogs about it
reading books about it
reading tested code written by others
writing a lot of tests, of course. It took me a few thousand tests to become good at it.
I often use it because sometimes it is very useful to run just a single test suite or test case from the whole test base.
I'm wondering if this is good practice or not?
Sure, as long as you run the full set of unit-tests occasionally (via a CI server sounds perfect)
Running the "interesting" tests regularly is better than running all the tests rarely.
I'd address the issue by having a subset of tests ("smoke tests") that take 1 minute or less that must be run before committing, then run the full set of tests from your CI server.
If your full set of tests takes > 15 minutes then I'd look to divide them and run them in parallel.
Then you can use --filter to run the tests you're most interested in first, run the smoke tests prior to commit, and have the rest run from the CI server.
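For reference, these are the kinds of invocations that works out to in practice (exact option behaviour can vary between PHPUnit versions, so check your version's --help):

phpunit --filter testUserCanCheckout      # run a single test method
phpunit --filter CheckoutTest             # run one test case class
phpunit --group smoke                     # run tests annotated with @group smoke
phpunit                                   # the full suite, e.g. on the CI server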

How to automate integration testing?

I'd like to know something. I know that to make your tests easier you should use mocks during unit testing so that you only test the component you want, without external dependencies. But at some point, you have to bite the bullet and test classes which interact with your database, files, network, etc.
My main question is: what do you do to test these classes?
I don't feel that installing a database on my CI server is a good practice, but do you have other options?
Should I create another server with other CI tools, with all the external dependencies?
Should I run integration tests on my CI server as often as my unit tests?
Maybe a full-time person should be in charge of testing these components manually? (Or of creating the test environment and configuring the interaction between your classes and your external dependencies, like editing your application's config files.)
I'd like to know how you do it in the real world.
I'd like to know how you do it in the real world.
In the real world there isn't a simple prescription about what to do, but there is one guiding truth: you want to catch mistakes/bugs/test failures as soon as possible after they are introduced. Let that be your guide; everything else is technique.
A couple common techniques:
Tests running in parallel. This is my preference; I like to have two systems, each running their own instance of CruiseControl* (which I'm a committer for), one running the unit tests with fast feedback (< 5 minutes) while another system runs the integration tests constantly. I like this because it minimizes the delay between when a checkin happens and a system test might catch it. The downside that some people don't like is that you can end up with multiple test failures for the same checkin, both a unit test failure and an integration test failure. I don't find this a major downside in practice.
A life-cycle model where system/integration tests run only after unit tests have passed. There are tools like AnthillPro* that are built around this kind of model and the approach is very popular. In their model they take the artifacts that have passed the unit tests, deploy them to a separate staging server, and then run the system/integration tests there.
If you've more questions about this topic I'd recommend the Continuous Integration and Testing Conference (CITCON) and/or the CITCON mailing list.
There are lots of CI and build/process automation tools out there. These are just representatives of their class of tools.
The approach I've seen taken most often is to run unit tests immediately on checkin, and to run more lengthy integration tests at fixed intervals (possibly on a different server; that's really up to your preference). I've also seen integration tests split into "short-running" integration tests and "long-running" integration tests, which are run at different intervals (the "short-running" tests run every hour, for example, and the "long-running" tests run overnight).
The real goal of any automated testing is to get feedback to developers as quickly as is feasible. With that in mind, you should run integration tests as often as you possibly can. If there's a wide variance in the run length of your integration tests, you should run the quicker integration tests more often, and the slower integration tests less often. How often you run any set of tests is going to depend on how long it takes all the tests to run, and how disruptive the test runs will be to shorter-running tests (including unit tests).
I realize this doesn't answer your entire question, but I hope it gives you some ideas about the scheduling part.
Depending on the actual nature of the integration tests I'd recommend using an embedded database engine which is recreated at least once before any run. This enables tests of different commits to work in parallel and provides a well defined starting point for the tests.
Network services - by definition - can also be installed somewhere else.
Always be very careful though, to keep your CI machine separated from any dev or prod environments.
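As one possible shape of that in JavaScript (using the 'sqlite3' npm package and an invented schema), the integration test can rebuild an in-memory database before each test, so every run starts from the same well-defined state:

// Integration test against an embedded, in-memory database.
var sqlite3 = require('sqlite3');

describe('user repository (integration)', function () {
  var db;

  beforeEach(function (done) {
    db = new sqlite3.Database(':memory:');                    // fresh database per test
    db.run('CREATE TABLE users (id INTEGER, name TEXT)', done);
  });

  afterEach(function (done) {
    db.close(done);
  });

  it('stores and reads back a user', function (done) {
    db.run('INSERT INTO users VALUES (1, ?)', ['Ada'], function () {
      db.get('SELECT name FROM users WHERE id = 1', function (err, row) {
        expect(row.name).toBe('Ada');
        done();
      });
    });
  });
});

The same idea applies with embedded engines such as HSQLDB or H2 on the JVM: because the database lives in memory and is created by the test itself, runs for different commits can execute in parallel without stepping on each other.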
I do not know what kind of platform you're on, but I use Java. Where I work, we create integration tests in JUnit and inject the proper dependencies using a DI container like Spring. They are run against a real data source, both by the developers themselves (normally a small subset) and the CI server.
How often you run the integration tests depends on how long they take to run, in my opinion. Run them as often as you can. Leave the real person out of this, and let him or her run manual system tests in areas that are difficult or too expensive to automate testing for (for instance: spelling, or the position of different GUI components). Leave the editing of config files to a machine. Where I work, we have system variables (DEV, TEST and so on) set on the computers, and let the app choose a config file based on that.

Resources