Meteor with QUnit - qunit

I am trying to use QUnit with a Meteor app. Should this be possible? Any recommended patterns?
I was trying to make an app that was "self testing" by making a route for "/test" but it doesn't appear that QUnit is running my tests (no test output appears).

@Tom, sure, here you go:
I've added a package for qunit with meteor here:
https://github.com/jpmec/meteor/commit/786b93153d94c0e2291ac210f64587dbbbad23d6
Some facts and disclaimers:
I didn't do the branch right; I branched from master, not devel.
I'm not spending much time trying to keep my meteor branch up to date.
This meteor branch is truly FUBARed with respect to the main meteor project, so don't branch from it.
Your best bet is to download it and look in the packages folder for qunit; that part I think I did right. You'll probably just want to drop it into your meteor packages folder and see if it helps you.
After trying it out some, here are my thoughts to other would-be qunit with meteor users:
I cannot figure out how to easily have a "test site" and "production site" with meteor. It seems like it is all or nothing out of the box, so you can have a self-testing site, but all users get to run the tests. (What I would like is to serve up one site on one port and another site on another port, while maintaining a consistent folder tree for my "app").
The hot code push of Meteor is really cool with QUnit. As you write your tests you see them go from red to green in semi-real time, with no need to keep switching to the test page and refreshing. This is by far the coolest part of Meteor, and of using QUnit with Meteor.

The answer to this question was a little more involved for me.
I found no discernible difference between putting QUnit in a package and just including the QUnit sources in my /client files. My difficulty was that sometimes tests seemed to run, sometimes not at all, and frequently a mysterious "global error" would appear in my test results.
This was caused by QUnit attempting to automatically launch the test run before my own code had loaded any tests. I found no good way to prevent the automatic behavior. My eventual solution was to let QUnit finish its (empty) automatic test run, then call QUnit.init(), load the tests, and finally call QUnit.start().
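To make that concrete, here is a minimal sketch of the approach, assuming client-side code in a Meteor app and an older QUnit build that still exposes QUnit.init(); the file name, timing, and the Items collection are illustrative only, not part of the original answer.
// client/run-tests.js (hypothetical file name)
Meteor.startup(function () {
  // Let QUnit's automatic (empty) run finish first, then reset and register the real tests.
  Meteor.setTimeout(function () {
    QUnit.init();

    test('collections are defined', function () {
      ok(typeof Items !== 'undefined', 'Items collection exists'); // Items is a hypothetical collection
    });

    QUnit.start();
  }, 0);
});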

Cypress 10 - How to run all tests in one go?

I used to use Cypress 9 on previous projects.
By default, cypress open or cypress open --browser chrome used to run all the tests for all React components.
However, I have installed Cypress 10 for the first time on a project that didn't have e2e tests yet. I added test specs, but I don't see any option to run all the tests together.
It seems I have to run the tests one by one, clicking on each of them.
Can anyone please suggest how I can run all the tests automatically?
It's been removed in Cypress v10; here are the related change notes:
During cypress open, the ability to "Run all specs" and "Run filtered specs" has been removed. Please leave feedback around the removal of this feature here. Your feedback will help us make product decisions around the future of this feature.
The feedback page to register your displeasure is here
You can create a "barrel" spec to run multiple imported specs.
I can't vouch for it working the same as v9 "Run all tests", but can't see any reason why not.
// all.spec.cy.js
import './test1.spec.cy.js' // relative paths
import './test2.spec.cy.js'
...
As @Constance says, this has been reinstated in v11.2.0.
But still a very handy technique if you want to run a pre-defined subset of your tests.
In Cypress version 11.2.0 the Run All button has been reinstated.
You need to set experimentalRunAllSpecs to true in cypress.config.js.
Please see Configuration - End-to-End Testing
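For reference, a minimal cypress.config.js along the lines of the linked docs might look like this (the rest of your e2e configuration stays as you already have it):
// cypress.config.js
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    // Re-enables the "Run All Specs" button in the open-mode runner (Cypress 11.2.0+)
    experimentalRunAllSpecs: true,
  },
});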
If the Cypress Test Runner is not a must, I suggest using the CLI approach.
You can trigger all the tests with npx cypress run (the video recordings and screenshots of failed steps are still saved in their respective folders), or add other Cypress flags to filter by specific spec files, browser, etc.
As per the feedback discussion, there is a workaround (the same as @Fody's answer) that will achieve the same result as v9. Also worth noting is the section on Continuous Integration and the Update 1, which includes a fix preventing this workaround from causing issues with the cypress run command.
Are there any current workarounds?
Yes. If you are impacted by the omission of this feature, it is possible to achieve the same level of parity as 9.x with a workaround Gleb Bahmutov explains here: https://glebbahmutov.com/blog/run-all-specs-cypress-v10/
This will still inherit the same problems as the previous implementation (which is why it was removed) but it will work in certain cases where the previous implementation was not problematic for your use case
https://github.com/cypress-io/cypress/discussions/21628#discussion-4098510
It was removed because people used it wrong.
The Test Runner is for debugging single tests, but when you run all tests, performance quickly becomes a problem and can crash the entire suite.
Running all tests should only be performed from the CLI.
Sources
https://github.com/cypress-io/cypress/issues/681
https://github.com/cypress-io/cypress/discussions/21628

Logic tests for OS X using Xcode 6

I want to implement unit testing in my Xcode project, and would like to run tests without requiring the application to be started.
Reasons for this are that I have a Core Data based document app that also uses a CVDisplayLink to drive continuous rendering in a background thread.
It strikes me that I do not need a running application to test Core Data data model functionality; this should be distinct from the view layer anyway. I would also like to isolate and performance-test my background rendering processes, something that seems very difficult with the app running, but could do easily without the application running, just by instantiating the right classes and feeding them the correct data.
I've seen other questions that have answers for Xcode versions before six, but the answers don't seem to work for the current version.
The docs now make a distinction between application and library tests. Library tests are run against library targets.
I'm not sure I want to reorganise my code into distinct libraries at the moment, and would prefer to avoid it or fake it somehow.
I saw an Open Radar somewhere relating to this on iOS, but I'm interested in OS X.
Has anyone any insight into this?
EDIT: I'm learning to cope with the existing setup for now, testing with the full app running. I can run some checks that way, then I close all documents and shut down the display link.
I can then run tests creating my own persistent store coordinator, in-memory data store, and context, as well as testing my rendering classes without fear of conflict with the other display thread.
I'm now running into trouble with linking sources; I just can't seem to get it right. I fiddle with settings, it seems to work for a bit, then it suddenly stops building again with Undefined symbols for architecture x86_64: errors, or has problems linking with third-party private frameworks. I look through the web, change a few things, and it starts working again. Then I add some tests, importing more of my classes, and things stop working again... Infuriating.
EDIT 2: Pretty much all sorted now, though maybe not terribly efficiently. For each test case class, I either open or close documents and start or stop the display link in the +(void)setUp method. I don't do anything in +(void)tearDown, and let setUp decide how to proceed based on the current state.
Although this means it's possible to flow from one test class to another minimizing document opens and closes, there doesn't seem to be a way to order the tests so that I could group them together.
BTW, I also solved my linking troubles mentioned above (Xcode 6 Testing Target Troubles); not really relevant to this question though.
It sounds like you landed on the standard solution: give your app a way to tell when it's being stood up for testing rather than for normal use, and have applicationDidFinishLaunching: skip your usual launch-time behaviors, leaving it to specific tests to provide any setup they need.
You might benefit from creating multiple test suites to deal with different expected conditions, like all the tests that work around a specific document being open.

Sharing Specflow Feature Files with Multiple Applications

My goal is to be able to write core tests that I can use within a unit testing framework as well as for UI testing with Selenium.
For a simple test like:
Scenario: Add two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120
I would create both unit tests to prove that my core API passes and a Selenium test to prove that my UI is doing the correct thing as well.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using "Add Existing Item as Link" as the solution.
Update: Adding the feature files to a common directory and adding them as links appears to be working great. The feature bindings regenerate for each project the feature file is included in, so I can run unit tests in one project and Selenium UI tests in the other.
First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global: load them in from any assembly and they are available to the SpecFlow runner. In respect of what you are trying to do, this actually makes things harder, as you need to put effort into keeping the UI bindings from being mixed up with the non-UI bindings.
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two stage process.
When you save the .feature file the SpecFlow VS plugin generates a .feature.cs file.
When you run your test engine (e.g. NUnit) it ignores the plain text and uses compiled code from .feature.cs
So if you start using linked .features, I have no idea whether the SpecFlow plugin will generate a .feature.cs for both instances of the file. (If you try this, please let us know.)
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already, in the example you have given, you are referring to things on the screen. If you are working with just the core API then there won't be a screen, so do we change this to fit better in a non-UI scenario?
Finally, you have another thing to consider: just how useful will your tests be? If you already have a test that exercises the core API, then what will it mean to run the same test via Selenium? All you will really be testing is the UI layer. In my current employment we have a great number of regression tests that perform this very kind of testing, running up a client that connects to a server and manipulating the UI to get the desired scenarios enacted. These are the most fragile tests we have, due to their scale. They constantly break and we basically have to check our entire codebase to find the line that broke them; often something like 10-100 of them break for a one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much.
In my own personal projects I tend to remove these tests completely and instead, with UIs, I avoid testing the view layer. With WPF MVVM, I execute Commands and test for results in ViewModels. If somebody then decides the TextBox should be a ComboBox, or that it will work better in mauve, then my testing is isolated.
In short, there is a reason you can't find anything about this on Google :-)
In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI, for example testing login and ensuring that the user is redirected to the home page after a successful login. If you are using Angular 2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer, I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end-specific tests (such as testing React router redirects) and SpecFlow for tests at lower architectural layers. Our front-end uses Node.js/Express and our backend is .NET Core. The idea is that the front-end tests mostly exercise the front-end only, with mocked-out AJAX calls to the backend (see sinonjs), and the back-end tests use EF Core with the in-memory option (see docs.efproject.net/en/latest/providers/in-memory/), so the tests all run fast.
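As a rough illustration of that split, a CucumberJS step definition with the backend stubbed out via sinon might look something like the following; the api module, step text, and assertions are hypothetical and only show the shape of such a test.
// features/step_definitions/login.steps.js (hypothetical)
const assert = require('assert');
const sinon = require('sinon');
const { Given, When, Then } = require('@cucumber/cucumber');
const api = require('../../src/api'); // hypothetical front-end module that wraps AJAX calls

Given('a registered user', function () {
  // Stub the AJAX layer so the scenario never touches the real backend
  this.loginStub = sinon.stub(api, 'login').resolves({ redirectTo: '/home' });
});

When('they log in with valid credentials', async function () {
  this.result = await api.login('user', 'secret');
});

Then('they are redirected to the home page', function () {
  assert.ok(this.loginStub.calledOnce);
  assert.strictEqual(this.result.redirectTo, '/home');
});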
Of course, you still need a few tests that actually go all the way through, but those are different; we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way, you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that go all the way from front to back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.

How can I debug node modules?

I am looking into getting involved with the rather exciting node.js, but I'm trying to find ways to replace my current development environment.
At present, my best friend in PHP-land is the FirePHP module for Firebug, which is a godsend as far as debugging your PHP goes. What methods do I have at my disposal for debugging Node code? Does it output errors and warnings like PHP can be set to?
There are multiple ways to debug node.
node debug app.js
node-inspector
console
util
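For the built-in debugger, the usual pattern is to drop a debugger; statement where you want execution to pause and start the script under node debug (node inspect on modern versions); a minimal sketch, with a made-up handler function:
// app.js (illustrative only)
function handler(req) {
  debugger;                       // execution pauses here when run under the debugger
  return { ok: true, path: req.url };
}

console.log(handler({ url: '/status' }));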
Personally, I use a lot of well-placed console.log, console.dir, and util.inspect statements throughout my code and follow the logic that way.
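For example, a quick sketch of that style of tracing (the object and values are made up):
// debug-trace.js (illustrative only)
const util = require('util');

const user = { name: 'Ada', roles: ['admin'], prefs: { theme: { dark: true } } };

console.log('user loaded:', user.name);                           // simple trace output
console.dir(user, { depth: null });                                // dump the full object, no depth limit
console.log(util.inspect(user, { depth: null, colors: true }));    // same idea via util.inspect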
Of course, unit tests go hand in hand with debugging. Write tests that assert that your code works; the unit tests will catch most of the bugs.
You must write unit tests for your node.js code. nodeunit is great for general testing.
If you're using Express as your web engine, then use expresso.
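A minimal nodeunit test module might look like this (the add function and file paths are hypothetical); you'd run it with nodeunit test/ from the project root:
// test/math.test.js (hypothetical)
const { add } = require('../lib/math'); // hypothetical module under test

exports.addsTwoNumbers = function (test) {
  test.expect(1);                  // declare how many assertions should run
  test.equal(add(2, 3), 5, 'add should sum its arguments');
  test.done();                     // tells nodeunit the test has finished
};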

Am I allowed to check in a failing test [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Our team is having a heated debate as to whether we allow failing unit tests to be checked-in to source control.
On one side the argument is that yes you can as long as it is temporary - to be resolved within the current sprint. Some say even that in the case of bugs that may not be corrected within the current sprint we can check-in a corresponding failing test.
The other side of the argument is that those tests, if they are checked-in must be marked with the Ignore attribute - the reasoning being that the nightly build should not serve as a TODO list for a developer.
The problem with Ignore attribute however is that we tend to forget about the tests.
Does the community have any advice for us ?
We are a team of 8 developers with a nightly build. Personally, I am trying to practice TDD, but the team tends to write unit tests after the code is written.
I'd say not only should you not check in new failing tests, you should disable your "10 or so long-term failing tests". Better to fix them, of course, but between having a failing build every night and having every included test passing (with some excluded) - you're better off green. As things stand now, when you change something that causes a new failure in your existing suite of tests, you're pretty likely to miss it.
Disable the failing tests and enter a ticket for each of them; that way you'll get to it. You'll feel a lot better about your automated build system, too.
I discussed this with a friend and our first comment was a recent geek&poke :) (attached)
To be more specific: all tests should be written beforehand (as long as it's supposed to be TDD), but those checking unimplemented functionality should have their expected value prepended with a negation. If it's not implemented, it shouldn't work. [If it works, the test is bad.] After implementing the functionality, you remove the ! and the test passes [or fails, but then it's there to do so :) ].
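A tiny JavaScript illustration of that idea (the thread itself doesn't name a language, and isPalindrome is a hypothetical, not-yet-implemented function):
const assert = require('assert');

// Not yet implemented: the stub deliberately gives the "wrong" answer.
function isPalindrome(word) {
  return false; // TODO: real implementation
}

// While the feature is missing, assert the negation so the suite stays green
// but the test still documents the intended behaviour.
assert.ok(!isPalindrome('racecar'), 'isPalindrome is not implemented yet');

// Once implemented, drop the "!" so the test asserts the real expectation:
// assert.ok(isPalindrome('racecar'), 'racecar should be a palindrome');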
You shouldn't think that tests are something written once and always right. Tests can have bugs too. So editing a test should be normal.
I'd say that checking in (known) failing tests should of course be only temporary, if at all. If the build is always failing, it loses its meaning (we've been there and that's not pretty).
I guess it would be OK to check in failing tests if you found a bug and could reproduce it quickly with a test, but the offending code is not "yours" and you don't have the time or responsibility to get into it enough to fix it. Then give a ticket to someone who knows their way around and point them to the test.
Otherwise I'd say use your ticket system as a TODO list, not the build.
It depends how you use tests. In my group, running tests is what you do before a commit in order to check that you (likely) have not broken anything.
When you are about to commit, it is painful to find failed tests that seem vaguely possibly related to your changes but still strange, investigate for a couple of hours, then realize it cannot possibly be because of your changes, do a clean checkout, compile, and find that indeed the test failures come from the trunk.
Obviously you do not use tests in the same way, otherwise you wouldn't even be asking.
If you were using a DVCS (e.g., git) this wouldn't be an issue as you'd commit the failing test to your local repository, make it work, and then push the whole lot (test and fix) back to the team's master repository. Job done, everyone happy.
As it seems you can't do that, the best you can do is to first make sure that the test is written and fails in your sandbox, and then fix that before committing. This might or might not be great TDD, but it's a reasonable compromise with the working practices of everyone else; working with your co-workers is more important than following some ivory tower principle in every aspect, since the author of the principle isn't located in the cubicle next door…
The purpose of your nightly build is to give you confidence that the code written the day before hasn't broken anything. If tests are often failing then you can't have that confidence.
You should first fix any failing tests you can, and delete or comment out/ignore the other failing ones. Every nightly build should be green. If it's not, then there is a problem, and that is immediately obvious, since it should have run green.
Secondly, you should never check in failing tests. You should never knowingly break the build. Ever. It causes unnecessary noise and distractions, and lowers confidence. It also creates an atmosphere of laziness around quality and discipline. With respect to ignored tests that are checked in, these should be caught and addressed in your code reviews, which should cover your test code as well.
If people want to write their code first and tests later, this is OK (though I prefer TDD), but only tested code which runs green should be checked in.
Finally, I would recommend changing the nightly build to a continuous integration build (run on each code check in) which might just change people's habits around code check-in.
I can see that you have a number of problems.
1) You are writing failing tests.
This is great! However, someone is intending to check those in "to be fixed within the current sprint". This tells me that it's taking too long to make those unit tests pass. Either the tests are covering more than one or two aspects of behaviour, or your underlying code is far too complex. Refactor the code to break it up and use mocks to separate the responsibilities of the class under test from its collaborators.
2) You tend to forget about tests with [Ignore] attributes.
If you're delivering code that people care about, either it works, or it has bugs which require the behaviour of the system to be changed. If your unit tests describe that behaviour but the behaviour doesn't work yet, then you won't forget because someone will have registered a bug. However, see point 1).
3) Your team is writing tests after the code is written.
This is fairly common for teams learning TDD. They might find it easier if they thought of the tests not as tests to check if their code is broken, but examples of how another developer might want to use their code, with a description of the value that their code provides. Perhaps you could pair with them and help them learn from what they already know about writing tests afterwards?
4) You're trying to practice TDD.
Do or do not. There is no try. Write a test first, or don't. Learning never stops even when you think you're doing TDD well.
