Why do we need `afterEach(cleanup);`? - nrwl

This is a question about unit testing (Jest + @testing-library/react).
Hi. I recently started using @nrwl/react.
It is an amazing product and I'm excited to build a monorepo project with Nx.
By the way, there is an afterEach(cleanup); call in the generated template test file.
This is my sample project.
https://github.com/tyankatsu0105/reproducibility-react-test-nx/blob/master/apps/client/src/app/app.spec.tsx#L7
However, react-testing-library doesn't need cleanup when using Jest:
https://testing-library.com/docs/react-testing-library/api#cleanup
Please note that this is done automatically if the testing framework you're using supports the afterEach global (like mocha, Jest, and Jasmine). If not, you will need to do manual cleanups after each test.
In fact, I see an error when I remove afterEach(cleanup); from the test files:
Found multiple elements with the text:
thanks!
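For reference, the cleanup registration does not have to live in every spec file; it can be moved into a shared Jest setup file instead. A minimal sketch, assuming a jest.setup.js wired up through Jest's setupFilesAfterEnv option (the file name is an assumption, not part of the generated workspace):

// jest.setup.js - referenced from the Jest config via setupFilesAfterEnv.
// Registering cleanup here unmounts everything render() mounted after every
// test, so queries in the next test don't match leftover DOM nodes.
import { cleanup } from '@testing-library/react';

afterEach(cleanup);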

Related

How can I produce GitHub annotations by creating report files on disk?

I am trying to find a portable way to produce code annotations for GitHub that avoids vendor lock-in.
Mainly I want to dump annotations into a file (YAML, JSON, ...) during the build process and have a task at the end that transforms this file into GitHub annotations.
The main goal here is to avoid hardcoding GitHub-annotation support into the tools that produce them, so other CI/CD systems could also consume the annotation reports and display them in their UI.
linters -> annotations.report -> github-upload
Tools like flake8 are able to produce output in a parsable file:line:column: message format, but I need to know whether there is any attempt to standardize annotations so we can collect and combine them from multiple tools and feed them to the CI/CD engine.
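For what it's worth, one vendor-neutral approach is to keep the intermediate report in exactly that file:line:column: message form and only translate it at the very end of the pipeline. A minimal sketch of such a translation step (the annotate.js name and the choice of GitHub's ::warning workflow command are assumptions, not from the question):

// annotate.js - reads flake8-style "file:line:col: message" lines from stdin and
// re-emits them as GitHub Actions workflow commands, which the Actions runner
// turns into annotations; on any other CI the output is just plain text.
const readline = require('readline');

const rl = readline.createInterface({ input: process.stdin });
rl.on('line', (line) => {
  const match = line.match(/^(.+?):(\d+):(\d+):\s*(.*)$/);
  if (!match) return; // skip lines that are not findings
  const [, file, lineNo, col, message] = match;
  console.log(`::warning file=${file},line=${lineNo},col=${col}::${message}`);
});

It could then be run as the last build step, e.g. flake8 . | node annotate.js.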
Today I googled what the heck those "GitHub Action Annotations" are all about, and this was among the hits:
https://github.com/marketplace/actions/annotations-action
GitHub action for creating annotations from JSON file
As of now that page also contains:
This repository uses npm packages from the @attest scope on GitHub; we are working hard to open source these packages.
Annotations Action is not certified by GitHub. It is provided by a third-party and is governed by separate terms of service, privacy policy, and support documentation.
I didn't try it; again, it was just a random Google hit.
I am currently using https://github.com/yuzutech/annotations-action
Sample action code:
- name: Annotate
  uses: yuzutech/annotations-action@v0.3.0
  with:
    repo-token: ${{secrets.GITHUB_TOKEN}}
    input: ./annotations.json
    title: 'Findings'
    ignore-missing-file: true
It does its job well, but with one minor defect. If you have findings on a commit/PR, you get to see each finding with a beautiful annotation right where you need it. If you re-push changes, even if the finding persists, the annotation is not displayed on later commits. I have opened an issue but have not yet received an answer.
The annotations-action mentioned above has not been updated and did not work for me at all (deprecated calls).
I haven't found anything else that worked exactly as I wanted it to.
Update: I found that you can use reviewdog to annotate based on findings. I also created a GitHub action that can be used for static code analysis here: https://github.com/tsigouris007/action-semgrep-reviewdog. You can visit the entrypoint.sh file and check how I piped the custom output to reviewdog using jq.

How to test Capybara?

I have a script that uses Capybara to publish links on Google+. I would like to have tests covering this functionality. Usually Capybara is used as a tool for writing integration tests; in my case I need to test the Capybara code itself.
I see 3 possible ways:
stub Capybara's methods (but in this case I test nothing but the stubbed methods)
test Capybara against a saved HTML/JS page (this would help me see that I did not break anything during refactoring)
do not test at all (no comments here)
Have you ever faced such a problem?
If you register different drivers for your app and your test code, possibly manage the sessions manually depending on how you're using it in your app, and are careful with Capybara's settings, you should be able to go with option 2. You have to be careful with Capybara's settings because most of them are global, so changing them for your tests will also change them for your app.

One set of tests for a few projects with different parameters

I'm using Protractor and Jasmine and would like to organize my E2E tests in the best way.
Example:
There is a set of tests that checks the registration function (registration with correct credentials, registering as an existing user, etc.).
I need to run those tests in three different projects. The tests are the same, but the credentials are different. One project might have 3 fields in the registration form, another one 6.
Right now everything is organized in a very complicated way:
each single test is written not as an "it" block but as a function
there is a function which contains all the tests (the test functions)
there is a file with a describe function for each project
in that file there is one "it" which calls the function containing all the tests
there is a test suite for each project
I believe there is a practice for organizing all of this the right way, so that each test lives in its own "it". I would be happy to see some links or advice.
Thank you in advance!
Since it's a broad question, I will redirect you to a few links. You should probably be looking at the page-object model of Protractor. It will help you simplify and set a standard for organising your tests in a way that is readable and easy to use. Here's the link to it as described by the Protractor team:
page-object model
However, if you want to know why we need to use such a pattern: plain Protractor tests have many shortcomings, which can be solved by using it. A detailed explanation is here:
shortcomings of protractor and how to overcome them
EDIT: Based on your comments I feel that you are trying to make a unified file/function that can cater to all the suites that will be using it. In order to handle such things, try adding a generalised function (to fill form fields, in your case), export that function and then require it in your test suites. Here's a sample link for it:
Exports and require
Hope this helps.
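As an illustration of that pattern, here is a minimal sketch; the file names, selectors and the per-project credentials file are assumptions, not from the original answer:

// registration.helpers.js - shared helper exported once and required by every suite
const RegistrationPage = {
  async register(fields) {
    // Fill only the fields this project actually has (3 in one app, 6 in another)
    for (const [name, value] of Object.entries(fields)) {
      await element(by.name(name)).sendKeys(value);
    }
    await element(by.css('button[type="submit"]')).click();
  },
};
module.exports = { RegistrationPage };

// registration.spec.js - each project keeps its own "it" blocks; only the data differs
const { RegistrationPage } = require('./registration.helpers');
const credentials = require('./config/project-a.credentials.json');

describe('registration', () => {
  it('registers with valid credentials', async () => {
    await browser.get('/register');
    await RegistrationPage.register(credentials.valid);
    expect(await element(by.css('.welcome')).isPresent()).toBe(true);
  });
});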

Multiple features inside a single feature file

My current Cucumber file looks like this:
Feature: Test Online application Page
  Scenario: Visit application home page and test links
  Scenario: Visit application Login and Validate login
Now I would like to add a few more scenarios, maybe for API testing, in the same file. So I was thinking of creating a new Feature for that instead of putting them under Feature: Test Online application Page. This way I don't need to create a separate feature file for API testing.
Feature: Test Online application Page
  Scenario: Visit application home page and test links
  Scenario: Visit application Login and Validate login

Feature: Test application API's
  Scenario: validate Login API
Is it possible to have multiple features within a single feature file, and is that a good practice? I just need to test one API, and I will run the API tests along with the online tests. I will still separate them using @online and @api tags.
It is not possible to have multiple features inside a single feature file. If you create multiple features inside a single feature file, you will get a Gherkin parser exception while running the Cucumber scenarios. So the answer is no.
C:/Users/ABC/RubymineProjects/XYZ.feature: Lexing error on line 47: 'Feature test google'. See https://github.com/cucumber-attic/gherkin2/wiki/LexingError for more information. (Cucumber::Core::Gherkin::ParseError)
Well, it is obviously not a good practice. It is best to put a single feature in a feature file; you should create new feature files for this. But you can add any number of scenarios to a single feature file.
The corresponding steps may or may not be in a single step file.
In BDD, Cucumber is designed for a non-technical audience as well; writing scenarios and step definitions in the Gherkin language, i.e. simple English, is a must to support that audience.
All scenarios should be executable independently, with no dependency on another scenario or feature file.
In my past experience, adding more complexity adds more flaky tests and a higher maintenance cost.
Agree with @philip John.
You can create a text file listing the feature files, and when you execute with that file Cucumber adds all the files in the order defined in it.
@order-execution.txt
In this example the file was created in the project root. File content:
./features/records/country.feature
./features/records/company.feature
"scripts": {
"test:company": "cucumber-js #order-execution.txt --tags \"#company\" -f json:result/records/company.json",
},
This is the same as writing it this way:
"scripts": {
"company": "cucumber-js ./features/records/country.feature ./features/records/company.feature --tags \"#company\" -f json:result/records/company.json",
},

Meteor test driven development [closed]

I don't see how to do test-driven development in Meteor.
I don't see it mentioned anywhere in the documentation or FAQ, and I don't see any examples or anything like that.
I see that some packages are using Tinytest.
I would need a response from the developers on what the roadmap is regarding this. Something along the lines of:
possible, no documentation, figure it out yourself
Meteor is not built in a way that you can make testable apps
this is a planned feature
etc.
Update 3: As of Meteor 1.3, Meteor includes a testing guide with step-by-step instructions for unit, integration, acceptance, and load testing.
Update 2: As of November 9th, 2015, Velocity is no longer maintained. Xolv.io is focusing their efforts on Chimp, and the Meteor Development Group must choose an official testing framework.
Update: Velocity is Meteor's official testing solution as of 0.8.1.
Not much has been written about automated testing with Meteor at this time. I expect the Meteor community to evolve testing best-practices before establishing anything in the official documentation. After all, Meteor reached 0.5 this week, and things are still changing rapidly.
The good news: you can use Node.js testing tools with Meteor.
For my Meteor project, I run my unit tests with Mocha using Chai for assertions. If you don't need Chai's full feature set, I recommend using should.js instead. I only have unit tests at the moment, though you can write integration tests with Mocha as well.
Be sure to place your tests in the "tests" folder so that Meteor does not attempt to execute your tests.
Mocha supports CoffeeScript, my choice of scripting language for Meteor projects. Here's a sample Cakefile with tasks for running your Mocha tests. If you are using JS with Meteor, feel free to adapt the commands for a Makefile.
Your Meteor models will need a slight bit of modification to expose themselves to Mocha, and this requires some knowledge of how Node.js works. Think of each Node.js file as being executed within its own scope. Meteor automatically exposes objects in different files to one another, but ordinary Node applications—like Mocha—do not do this. To make our models testable by Mocha, export each Meteor model with the following CoffeeScript pattern:
# Export our class to Node.js when running
# other modules, e.g. our Mocha tests
#
# Place this at the bottom of our Model.coffee
# file after our Model class has been defined.
exports.Model = Model unless Meteor?
...and at the top of your Mocha test, import the model you wish to test:
# Need to use Coffeescript's destructuring to reference
# the object bound in the returned scope
# http://coffeescript.org/#destructuring
{Model} = require '../path/to/model'
With that, you can start writing and running unit tests with your Meteor project!
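To make that concrete, a unit test against the exported model could look roughly like this; a plain-JS sketch, where only the require path comes from the snippet above and the assertion is an assumption:

// test/model.test.js - run with mocha (or via the Cakefile/Makefile task)
const chai = require('chai');
const { Model } = require('../path/to/model');

describe('Model', function () {
  it('is exported for Node.js test runners', function () {
    chai.assert.isFunction(Model);
  });
});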
Hi all, check out Laika, a whole new testing framework for Meteor:
http://arunoda.github.io/laika/
You can test both the server and client at once.
See some Laika examples here
See here for features
See the concept behind Laika
See the GitHub repository
Disclaimer: I'm the author of Laika.
I realize that this question has already been answered, but I think it could use some more context, in the form of an additional answer providing said context.
I've been doing some app development with Meteor, as well as package development, both by implementing a package for Meteor core and one for Atmosphere.
It sounds like your question might actually be a question in three parts:
1. How does one run the entire Meteor test suite?
2. How does one write and run tests for individual smart packages?
3. How does one write and run tests for one's own application?
And, it also sounds like there may be a bonus question in there somewhere:
4. How can one implement continuous integration for 1, 2, and 3?
I have been talking with, and have begun collaborating with, Naomi Seyfer (@sixolet) on the Meteor core team to help get definitive answers to all of these questions into the documentation.
I submitted an initial pull request addressing 1 and 2 to Meteor core: https://github.com/meteor/meteor/pull/573.
I also recently answered this question:
How do you run the meteor tests?
I think that @Blackcoat has definitively answered 3, above.
As for the bonus, 4, I would suggest using circleci.com, at least for continuous integration on your own apps. They currently support the use case that @Blackcoat described. I have a project in which I've successfully gotten tests written in CoffeeScript to run unit tests with Mocha, pretty much as @Blackcoat described.
For continuous integration on Meteor core and smart packages, Naomi Seyfer and I are chatting with the founder of CircleCI to see if we can get something awesome implemented in the near term.
RTD has now been deprecated and replaced by Velocity, which is the official testing framework for Meteor 1.0. Documentation is still relatively new, as Velocity is under heavy development. You can find more information on the Velocity GitHub repo, the Velocity homepage and The Meteor Testing Manual (paid content).
Disclaimer: I'm one of the core team members of Velocity and the author of the book.
Check out RTD, a full testing framework for Meteor, here: rtd.xolv.io.
It supports Jasmine/Mocha/custom frameworks and works with both plain JS and CoffeeScript. It also includes test coverage that combines unit/server/client coverage.
An example project is here
A blog post explaining unit testing with Meteor is here
An e2e acceptance-testing approach using Selenium WebdriverJS and Meteor is here
Hope that helps. Disclaimer: I am the author of RTD.
I used this page a lot and tried all of the answers, but from my beginner's starting point I found them quite confusing. Whenever I ran into trouble, I was flummoxed as to how to fix it.
This solution is really simple to get started with, if not fully documented yet, so I recommend it for people like myself who want to do TDD but aren't sure how testing in JavaScript works and which libraries plug into what:
https://github.com/mad-eye/meteor-mocha-web
FYI, I found that I also needed to use the router Atmosphere package to make a '/tests' route that runs and displays the results from the tests, as I didn't want them cluttering my app every time it loads.
Regarding the usage of Tinytest, you may want to take a look at these useful resources:
The basics are explained in this screencast:
https://www.eventedmind.com/feed/meteor-testing-packages-with-tinytest
Once you understood the idea, you'll want the public API documentation for tinytest. For now, the only documentation for that is at the end of the source of the tinytest package: https://github.com/meteor/meteor/tree/devel/packages/tinytest
Also, the screencast talks about test-helpers, you may want to have a look at all the available helpers in here:
https://github.com/meteor/meteor/tree/devel/packages/test-helpers
There is often some documentation inside each file.
Digging into the existing tests of Meteor's packages will provide a lot of examples. One way of doing this is to search for Tinytest. or test. in the packages directory of Meteor's source code.
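To give a flavour of what such tests look like, a package test registered with Tinytest is roughly this (a minimal sketch, not taken from Meteor's source):

// my-package-tests.js - run with: meteor test-packages <package-name>
Tinytest.add('example - addition works', function (test) {
  test.equal(1 + 1, 2);
});

Tinytest.addAsync('example - async callback is invoked', function (test, onComplete) {
  setTimeout(function () {
    test.isTrue(true);
    onComplete();
  }, 10);
});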
Testing becomes a core part of Meteor in the upcoming 1.3 release. The initial solution is based on Mocha and Chai.
The original discussions of the minimum viable design can be found here and the details of the first implementation can be found here.
MDG has produced the initial bones of the guide documentation for testing, which can be found here, and there are some example tests here.
This is an example of a publication test from the link above:
it('sends all todos for a public list when logged in', (done) => {
  const collector = new PublicationCollector({userId});
  collector.collect('Todos.inList', publicList._id, (collections) => {
    chai.assert.equal(collections.Todos.length, 3);
    done();
  });
});
I'm doing functional/integration tests with Meteor + Mocha in the browser. I have something along the lines of the following (in CoffeeScript for better readability):
On the client...
Meteor.startup ->
  Meteor.call 'shouldTest', (err, shouldTest) ->
    if err? then throw err
    if shouldTest then runTests()

# Dynamically load and run mocha. I factored this out into a separate method so
# that I can (re-)run the tests from the console whenever I like.
# NB: This assumes that you have your mocha/chai scripts in .../public/mocha.
# You can point to a CDN, too.
runTests = ->
  $('head').append('<link href="/mocha/mocha.css" rel="stylesheet" />')
  $.getScript '/mocha/mocha.js', ->
    $.getScript '/mocha/chai.js', ->
      $('body').append('<div id="mocha"> </div>')
      chai.should() # ... or assert or expect ...
      mocha.setup 'bdd'
      loadSpecs() # This function contains your actual describe(), etc. calls.
      mocha.run()
...and on the server:
Meteor.methods 'shouldTest': -> true unless Meteor.settings.noTests # ... or whatever.
Of course you can do your client-side unit testing in the same way. For integration testing it's nice to have all Meteor infrastructure around, though.
As @Blackcoat said, Velocity is the official TDD framework for Meteor. But at the moment Velocity's web page doesn't offer good documentation, so I recommend you watch:
The concept behind Velocity
A step-by-step tutorial
And especially the official examples
Another option, made easily available since 0.6.0, is to run your entire app out of local smart packages, with a bare minimum amount of code outside of packages to boot your app (possibly invoking a particular smart package that is the foundation of your app).
You can then leverage Meteor's Tinytest, which is great for testing Meteor apps.
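For reference, with the app split into local packages, each package declares its tests in its package.js. A minimal sketch using the newer package API (older Meteor versions used Package.on_test instead of Package.onTest; the package and file names here are assumptions):

// packages/my-feature/package.js
Package.onTest(function (api) {
  api.use(['tinytest', 'my-feature']);
  api.addFiles('my-feature-tests.js');
});

Running meteor test-packages my-feature then executes just that package's Tinytest suite.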
I've successfully been using xolvio:cucumber and Velocity to do my testing. It works really well and runs continuously, so you can always see that your tests are passing.
Meteor + TheIntern
Somehow I managed to test a Meteor application with TheIntern.js.
It is tailored to my needs, but I still think it may point someone in the right direction, so I am sharing what I did to resolve this issue.
There is an execute function which allows us to run JS code, through which we can access the browser's window object and hence Meteor as well.
Want to know more about execute?
This is how my test suite looks for functional testing:
define(function (require) {
  var registerSuite = require('intern!object');
  var assert = require('intern/chai!assert');

  registerSuite({
    name: 'index',

    'greeting form': function () {
      var rem = this.remote;
      return this.remote
        .get(require.toUrl('localhost:3000'))
        .setFindTimeout(5000)
        .execute(function () {
          console.log("browser window object", window)
          return Products.find({}).fetch().length
        })
        .then(function (text) {
          console.log(text)
          assert.strictEqual(text, 2,
            'Yes I can access Meteor and its Collections');
        });
    }
  });
});
To know more, this is my gist
Note: I am still in a very early phase with this solution. I don't know whether I can do complex testing with it or not, but I am pretty confident about it.
Velocity is not mature yet; I am facing setTimeout issues when using it. For server-side unit testing you can use this package.
It is faster than Velocity, which takes a huge amount of time whenever I test a spec that involves a login. With Jasmine code we can test any server-side method and publication.
