Trouble getting Jasmine to test routing (iron:router related) - jasmine

I've just started trying to use Velocity testing for Meteor. I've hit a roadblock trying to put my routes under test. I can't get this to work with Jasmine or Mocha:
(With only the packages iron:router, velocity:html-reporter, coffeescript and sanjo-jasmine added to a default Meteor app.)
In /tests/jasmine/client/integration/router-test.coffee:
describe "Route", ->
describe "non-existing", ->
it "should not run green", ->
Router.go "foo"
expect(Router.current().url).toEqual("/foo")
describe "existing", ->
it "should run green", ->
Router.go "bar"
expect(Router.current().url).toEqual("/bar")
In /client/router.coffee:
Router.route "bar"
And in the default html file:
<template name="bar">
<p>Yeah.</p>
</template>
If I run Router.go("bar") in the JS console, it works fine as expected: Router.current().url outputs /bar. However, in the reporter I get the following error:
Expected 'http://localhost:64927/?jasmine=true' to equal '/bar'.
This implies that the router does find the route "bar", but the navigation does not seem to run in the same way. Even stranger, when I navigate to http://localhost:64927/?jasmine=true, my browser jumps to http://localhost:64927/bar by some magic means.
Any ideas?
Also, I've noticed that sometimes tests run green despite there being an uncaught exception on the JS console. Since these errors tend to just break the execution of the test function, meaning no assertions are processed, this is a really dangerous thing to have in a testing framework. Any idea how to counter this?

Actually, there is a helper you must add; it is described under integrating Velocity/Jasmine with iron:router in the Velocity docs:
https://velocity.readme.io/docs/jasmine-integration-tests-with-iron-router

My guess is you may be hitting an async issue. Router.go is not instant, and you're checking whether the current route has switched too soon afterwards.
Also, ?jasmine=true is the URL of the mirror that gets hit to start the mirror testing, so when you read the current route (too quickly) that is likely what gets returned. When you navigate to the mirror manually, it looks like the test is running again (because jasmine=true is in the URL), which in turn runs Router.go.
I can't tell exactly what you're trying to test, but you could add a hook to do your assertion in a Router.onAfterAction. Don't forget to also use Jasmine's done() callback, since you'll be writing an async test.
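For illustration, a minimal sketch (in plain JavaScript rather than the CoffeeScript above) of asserting asynchronously and calling done(); it waits on the reactive Router.current() with Tracker.autorun rather than the onAfterAction hook mentioned above, which is just one assumption about how to handle the timing:

describe("Route", function () {
  describe("existing", function () {
    it("should run green", function (done) {
      Router.go("bar");
      // Router.current() is reactive, so this autorun re-runs whenever the
      // route changes; finish the test once the new route is active.
      Tracker.autorun(function (computation) {
        var current = Router.current();
        if (current && current.url === "/bar") {
          expect(current.url).toEqual("/bar");
          computation.stop();
          done();  // tell Jasmine the async test is complete
        }
      });
    });
  });
});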
As for the tests running green on exceptions, that would be a bug! If you could create a reproducible repo and post a bug on the framework's issues on GitHub, it will help get that resolved.
Hope that helps!

Related

Why does running Cypress on a React/GraphQL app return network errors when normal browsing doesn't?

I have a React/Apollo client and an Apollo/Neo4j backend application based on GRANDstack.
My React app runs on localhost:3000 and my GraphQL server on localhost:4001/graphql, and they communicate without fail. All is working well in the app (with CORS enabled), but I wanted to implement testing with Cypress.
Should I expect Cypress to be able to observe the flow between React and GraphQL without error? Or is this beyond its capability?
What I've tried:
I set up Cypress and ran the following test:
it("Opens myPlan.", function() {
cy.visit("localhost:3000/myPlan");
cy.wait(6000);
});
At first Cypress setup, my site loaded. One of the first things the app does is run a GraphQL query for a few values and create a dropdown box based on those values. While this and all other GraphQL requests work fine in the browser, in Cypress the same code gives me errors like {"graphQLErrors":[],"networkError":{"name":"ServerParseError","response":{},"statusCode":404,"bodyText":""},"message":"Network error: Unexpected end of JSON input"}.
Presumably, the problem is that there are two endpoints, and cy.visit only allows one. I tried disabling chromeWebSecurity and tried an "Access-Control-Allow-Origin-master" plugin.
Edit: I found someone who knew Cypress, and they suggested adding:
"proxy": "http://localhost:4001/",
to my React client config. This avoids the multi-port issues, and Cypress works.
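For reference, a hedged sketch of where that setting might live, assuming a create-react-app style package.json (the surrounding fields are placeholders, not taken from the question):

{
  "name": "my-react-client",
  "version": "0.1.0",
  "proxy": "http://localhost:4001/",
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}

Presumably the client then issues its GraphQL requests to a relative path, and the dev server forwards them to port 4001, so Cypress only ever deals with one origin.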

cy.visit crashes without meaningful messages

I'm new to Cypress and I want to create tests for a rather old-fashioned website which is unfortunately not available on the internet.
When I perform a simple call to the website cypress crashes:
cy.visit('http://10.0.0.6:8080/dmc')
The message I get within the Chromium instance is: "Whoops, there is no test to run. Choose a test to run from the desktop application."
Shortly before I get this message, I see some messages in red in the left Chromium task-runner panel (sorry for not knowing the correct term), but the timespan is too short to read them.
So my questions are: any ideas why Cypress is crashing? I know this may be a tough one since the website is not accessible.
Can I find these "red messages" somewhere else, e.g. in a logfile?
BR,
Richard

"New version available" with service worker and sw-precache

I'm trying to use sw-precache, but I must be doing something wrong!
I'm mostly using the demo code available from the GitHub repo and can't seem to get updates to the app to come through. Once it's cached the first time, it never checks for new versions.
I was expecting that when I publish a new service worker, the browser would request the new service worker and update the cache accordingly in the background. Then using the registration code in the example, I would be able to prompt the user to refresh and get the latest version from their newly refreshed cache.
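For context, here is a hedged sketch of the standard registration/update-detection pattern being referred to (not necessarily the exact demo code from the repo):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js').then(function (registration) {
    registration.onupdatefound = function () {
      var installingWorker = registration.installing;
      installingWorker.onstatechange = function () {
        if (installingWorker.state === 'installed') {
          if (navigator.serviceWorker.controller) {
            // An old worker was already controlling the page, so this is an update.
            console.log('New or updated content is available.');
          } else {
            // First install: everything has been precached.
            console.log('Content is now available offline!');
          }
        }
      };
    };
  });
}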
Would really appreciate if someone could please point me in the right direction.
Example
To demonstrate the problem, I've created an isolated example here:
https://github.com/stevenocchipinti/sw-precache-demo
The example uses a basic skeleton from create-react-app, which has a built-in build task that takes care of fingerprinting the filenames, etc.
I suspect the problem is with me caching everything by using the following sw-precache config:
{
  "staticFileGlobs": [ "build/**/*.*" ],
  "stripPrefix": "build/"
}
There are more accurate steps in the repo's readme, but the basic steps I'm taking to reproduce the problem are as follows (with my probably incorrect expectations).
Steps and Assumptions
Browse to the app for the first time
  I should see "Content is now available offline!" in the console
Reload the page
  The message in the console should not appear again because the service worker is installed, but the page should still work
Go offline and reload the page
  The page should still work
Make a visible change to the source code
Rebuild (run the build task and sw-precache)
(This is where my understanding must be wrong.)
Reload the page
  The service worker should update the cache in the background
  When it's done, you should see "New or updated content is available." in the console
  The changes themselves should not be visible until the next reload
Reload the page again
  The browser will use the new cache this time around
  The changes should be visible now!
  There shouldn't be any messages in the console
The problem
Once the app has been cached initially, it will never update unless you unregister the service worker or force a reload.
I'm not sure how to make this work - any help would be greatly appreciated!
After replicating your development hosting environment, I can see that you're serving your service-worker.js file with a browser HTTP cache lifetime of one hour.
There's more information as to why this is leading to the behavior you're seeing, along with best practices, in this previous answer. As mentioned at the top of that answer, browsers plan on changing their behavior to stop honoring the HTTP cache for the service worker file by default, mainly due to the type of confusion that you're experiencing here. For the time being, though, the production versions of both Chrome and Firefox continue to honor those headers.
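As an illustration of the fix, here is a hedged sketch assuming an Express server in front of the build output (your actual hosting setup may differ): serve service-worker.js with caching disabled so the browser re-checks it on every navigation.

const express = require('express');
const app = express();

// Serve the service worker itself with no HTTP caching, so updates are
// picked up immediately instead of after the one-hour cache lifetime.
app.get('/service-worker.js', function (req, res) {
  res.set('Cache-Control', 'no-cache, no-store, must-revalidate');
  res.sendFile('service-worker.js', { root: 'build' });
});

// Fingerprinted assets can keep normal caching.
app.use(express.static('build'));

app.listen(3000);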

How can I 'do something' if a Protractor test case fails?

I have a test suite of around 1000 test cases, but sometimes if one test case fails due to an unclosed popup window, all of the subsequent test cases also fail, because the popup modal does not allow Protractor to interact with page elements. (My app is that way.)
So I want to create some condition such that, if a test case fails, I refresh the page or go to my homepage link, since all test cases start from the same starting point.
This would prevent all my subsequent test cases from failing. This method was called a recovery scenario back in the QTP/UFT days.
I was also facing this kind of issue, where if one test case fails all subsequent test cases fail. I'm not sure whether there are any recovery scenarios available in Protractor, but I use beforeAll, afterAll, beforeEach and afterEach to start each test from a clean state. I have added a helper function that navigates to the home page, and I call it every time from beforeEach (see the sketch below).
This is a helpful link.
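A minimal sketch of that clean-state approach, assuming Jasmine-style Protractor specs (the goToHomePage helper name is hypothetical):

// Helper that returns the suite to a known starting point, so a popup left
// behind by a failed test can't block the next one.
function goToHomePage() {
  // Relies on Protractor's WebDriver control flow; with async/await you
  // would await this call instead. Assumes baseUrl is set in protractor.conf.js.
  browser.get('/');
}

describe('my suite', function () {
  beforeEach(function () {
    // Start every test from the home page.
    goToHomePage();
  });

  it('starts from the home page', function () {
    // ... test body ...
  });
});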

What does "HTTP request failed" mean in an rspec/capybara/poltergeist/phantomjs spec?

Even when all tests are passing, I see many many instances of this message amid the successful test output:
...
in the single-post view
behaves like editing a comment
HTTP request failed.
HTTP request failed.
HTTP request failed.
...
What is causing it?
One possibility is that requests made by, for example, third-party analytics scripts on your page are failing.
You can see their activity by inspecting the output of poltergeist's page.driver.network_traffic at the end of a test.
If you think this is the problem, you could take those scripts out of the picture by
including them in the page only if you're not running tests, or
using poltergeist's page.execute_script to replace appropriate functions in those third-party scripts with no-op functions, as sketched below. (That takes more work but leaves the page contents more production-like, which might catch a few more possible errors.)
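For example, the JavaScript passed to page.execute_script might look something like this (window.ga and window._gaq are hypothetical analytics globals; substitute whatever your third-party scripts actually define):

// Stub third-party analytics entry points with no-ops so the page stops
// issuing real network requests during tests.
window.ga = function () {};
window._gaq = { push: function () {} };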
