What is the difference between Zombie.js and Jasmine?

May I know what the difference is between Zombie.js and Jasmine? Are they both frameworks?

Jasmine is a unit test framework for BDD (behavior-driven development). It requires an execution environment like Node.js or a browser like Firefox, Chrome, IE, PhantomJS, etc. to run (and to provide the environment for the code under test). Jasmine provides the test execution and assertion infrastructure (that's the describe(), it(), expect()).
Zombie.js is an emulated, headless browser. It's a browser on its own plus an interaction API for itself, much like Selenium/WebDriver. It uses jsdom under the hood to provide the APIs browsers usually provide. Zombie.js requires a test execution and assertion infrastructure (like Mocha + should.js, or even Jasmine).
With Jasmine you write tests on a module or group-of-modules level, but usually not on an application level.
With Zombie.js you interact with a website (served by a server) through an interaction API.
With Jasmine you make fine grained assertions on the output or events created for certain input - on the module level.
With Zombie.js you interact with the whole application (or website).
With Jasmine you test only the JavaScript part.
With Zombie.js you test the frontend + backend, though you might be able to mock away and intercept server interaction (maybe, I'm not familiar with it).
With Jasmine you call a method/function, pass a parameter, and test the return value and events.
With Zombie.js you load a page, fill a form, and test the output.
With Jasmine you need to run the tests in the proper execution environment (like Firefox, Chrome, ...).
With Zombie.js your pages run in a new execution environment.
With Jasmine you can test in the browsers your consumers use, with their typical quirks.
With Zombie.js you test your application in a new browser with its own quirks.
Jasmine example:
// spy on the other module to verify "method" gets called on it
spyOn(otherModule, "method");
// create the module under test
const module = new Module(otherModule);
// run it; it passes the value on to otherModule.method() and always returns 42
const returnValue = module.run(31415);
// assert the result and the interaction with the other module
expect(returnValue).toBe(42);
expect(otherModule.method).toHaveBeenCalledWith(31415);
Zombie.js example:
// create browser
const browser = new Browser();
// load page by url (assumes the app is being served and the url points at it)
browser.visit('/signup', function() {
  browser
    // enter form data by name/CSS selectors
    .fill('email', 'zombie@underworld.dead')
    .fill('password', 'eat-the-living')
    // interact, press a button; the callback fires once the resulting page has loaded
    .pressButton('Sign Me Up!', function() {
      // actual test for output data
      browser.assert.text('title', 'Welcome To Brains Depot');
      done(); // done comes from the surrounding test framework (Mocha/Jasmine)
    });
});
Zombie.js, like WebDriver/Selenium, is no replacement for a unit testing framework like Jasmine or Mocha.
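For completeness, here is roughly how the signup flow above looks when driven from Mocha, as suggested above (a sketch; the application URL, the form field names, and the expected title are assumptions):
const Browser = require('zombie');

describe('signup page', function() {
  const browser = new Browser();

  it('welcomes the new zombie', function(done) {
    // assumed URL of the running application
    browser.visit('http://localhost:3000/signup', function() {
      browser
        .fill('email', 'zombie@underworld.dead')
        .fill('password', 'eat-the-living')
        .pressButton('Sign Me Up!', function() {
          browser.assert.text('title', 'Welcome To Brains Depot');
          done();
        });
    });
  });
});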

Related

How to get WebStorm to run ESLint ruletester efficiently

I am writing an ESLint plugin and I have been testing with eslint.RuleTester. This works: I know how to configure its test options argument, but I always have to run the entire test file.
Here is an example test file:
const {RuleTester} = require('eslint');
const ruleTester = new RuleTester({setup});
const rule = require('myrule');
// this works, but I have to run the entire file (and thus all the tests)
ruleTester.run('myruletest', rule, {invalid, valid});
Normally, when I install a test runner I get a run configuration for it and handy play⏯ and debug🐞 buttons in line with each test. As I write more tests (particularly in the same file) it would be handy to quickly click a => beside a test and run just that single test.
If I try to call ruleTester.run from a mocha.it callback it will not report the test correctly (and definitely cannot debug / step into it).
E.g. this does not work well:
const mocha = require('mocha');
const {RuleTester} = require('eslint');
const ruleTester = new RuleTester({setup});
const rule = require('myrule');
// nice play button and custom run configuration, but not honest test feedback
it('mytest', () => {
  // it'll run this but will not report correctly -- `it` says it always passes
  ruleTester.run('myruletest', rule, {invalid, valid});
});
it('mytest', async () => {
  // async is of no help
  await ruleTester.run('myruletest', rule, {invalid, valid});
});
// this still works, but then I have to run the entire file (and thus all the tests)
ruleTester.run('myruletest', rule, {invalid, valid});
So how do I tell WebStorm to either:
recognize eslint.RuleTester as a test runner, or
properly call an instance of RuleTester from my own test runner?
Recognizing eslint.RuleTester as a test runner would require developing a special plugin.
See http://www.jetbrains.org/intellij/sdk/docs/ for basic documentation on plugin development. You can use existing plugins as examples: the Mocha runner is not fully open source, but its JavaScript part is (https://github.com/JetBrains/mocha-intellij); there is also an open source plugin for Karma: https://github.com/JetBrains/intellij-plugins/tree/master/js-karma

I'd like to use Async/await and onPrepare to make sure my tests don't start until onPrepare is complete

I want onPrepare to finish before any tests are run and I'm using async / await.
I'm new to JavaScript and Protractor but I've been writing test automation for a couple of decades. This seems like an incredibly simple thing to want to do, have onPrepare finish before a test starts, but I'm confused by everything I've seen out there.
I've set SELENIUM_PROMISE_MANAGER: false so I don't want to use promises to do this, right?
My landing page is in Angular.
Do I use async and await on onPrepare, or browser.driver.wait, or webdriver.until.elementLocated? If so, do I put 'await' before those waits? (That seems redundant.)
onPrepare: async () => {
  await browser.driver.get('https://localhost:8443/support-tool/#/service/config');
  await browser.driver.findElements(by.className('mat-table'));
  browser.driver.wait(webdriver.until.elementLocated(by.css('mat-select-value')), 10000); // (returns webdriver not defined)
},
First, I get "webdriver not defined" when I run it. Once I get it to work, will my tests wait for onPrepare to be completed before they start running?
So Protractor is a wrapper around WebDriverJS, and WebDriverJS is completely asynchronous. To give a very high-level definition, Protractor wraps WebDriverJS up so that every action returns a promise (for instance .get(), .sendKeys(), .findElement()).
Previously WebDriverJS had what is referred to as the 'control flow', which allowed users to write code as they would in any synchronous programming language; the fact that almost everything is a promise was handled behind the scenes. This feature has been deprecated in the latest versions, and the main reason is that the introduction of the async/await style of handling promises makes it much easier for users to manage promises themselves.
If you are using Protractor 6.0+ the control flow is disabled by default, but it will be disabled for you regardless because you have set SELENIUM_PROMISE_MANAGER: false. You will need to manually manage your promises, which you are doing, by using async/await.
browser.driver vs browser
I also want to point out that by using browser.driver.get you are referring to the underlying Selenium instance and not the Protractor wrapper instance, so it will not wait for the Angular page to stabilize before interacting (I could be mistaken on this). There is more info on the distinction in this thread.
Almost any action that involves the browser or the file system will return a promise, so include an await before it, and any function that contains an await needs to be declared async.
I would write your code as follows:
onPrepare: async () => {
  await browser.get('https://localhost:8443/support-tool/#/service/config');
  let someElement = element(by.css('mat-select-value'));
  await browser.wait(protractor.ExpectedConditions.presenceOf(someElement), 10000);
},
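The specs themselves follow the same pattern. A minimal sketch (the URL and the .mat-table class are taken from the question and are assumptions about your app):
describe('support tool config page', () => {
  it('shows the config table', async () => {
    // every browser interaction returns a promise, so await each one
    await browser.get('https://localhost:8443/support-tool/#/service/config');
    const rows = element.all(by.css('.mat-table'));
    expect(await rows.count()).toBeGreaterThan(0);
  });
});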
Finally, as long as your onPrepare is using awaits properly it should for sure complete before your tests begin.
Hope that helps and is clear, it was longer than I anticipated.

How does React deal with pre-compiled HTML from PhantomJS?

I compiled my React app using webpack and got a bundle file, bundles.js. My bundles.js contains a component that makes API calls to get its data.
I put this file in my HTML and pass the URL to PhantomJS to pre-compile static HTML for SEO reasons.
I am witnessing something strange here: the Ajax calls to the APIs are not getting fired at all.
For example, I have a component called Home which is called when I request the URL /home. My Home component makes an Ajax request to the backend (django-rest) to get some data. Now when I load the home page in PhantomJS this API call is not fired.
Am I missing something here?
I have been rendering React-based apps in PhantomJS since 2014. Make sure you use the latest PhantomJS version, v2.x. Problems with PhantomJS occur because it uses an older WebKit engine, so if you use some CSS3 features make sure they are prefixed correctly, for example the flexbox layout.
On the JS side, PhantomJS does not support many newer APIs (for example fetch); to fix this, add the polyfills and you're fine. The most complicated thing is tracking down errors: use console.log and evaluate code inside PhantomJS. There is also a debugging mode, which is actually quite difficult to use, but it can help you track down complex errors. I used the WebKit-based browser Aurora to track down some of the issues.
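One way to get the polyfills in is to inject them into every page before your bundle runs; a sketch using the standard PhantomJS page.onInitialized and page.injectJs callbacks (the polyfill file paths are illustrative):
var page = require('webpage').create();

// inject polyfills into the page context before any of the app's scripts run
page.onInitialized = function() {
  page.injectJs('polyfills/es6-promise.auto.js');
  page.injectJs('polyfills/fetch.umd.js');
};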
For debugging the network traffic, try logging the requested and received events:
var page = require('webpage').create();
page.onResourceRequested = function(request) {
  console.log('Request ' + JSON.stringify(request, undefined, 4));
};
page.onResourceReceived = function(response) {
  console.log('Receive ' + JSON.stringify(response, undefined, 4));
};
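Along the same lines, surfacing console output and uncaught page errors helps with the error tracking mentioned above (these are standard PhantomJS page callbacks):
page.onConsoleMessage = function(msg) {
  console.log('Console: ' + msg);
};
page.onError = function(msg, trace) {
  console.log('Error: ' + msg);
  (trace || []).forEach(function(t) {
    console.log('  at ' + t.file + ':' + t.line);
  });
};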

Best way to control firefox via webdriver

I need to control Firefox browser via webdriver. Note, I'm not trying to control page elements (i.e. find element, click, get text, etc); rather I need access to Firefox's profiler and force gc (i.e. I need firefox's Chrome Authority and sdk). For context, I'm creating a micro benchmark framework, not running a normal webdriver test.
Obviously raw webdriver won't work, so what I've been trying to do is:
1) Create a Firefox extension/add-on that does what I need, i.e.:
var customActions = function() {
  console.log('calling customActions.');
  // I need to access chrome authority:
  var {Cc, Ci, Cu} = require("chrome");
  Cc["@mozilla.org/tools/profiler;1"].getService(Ci.nsIProfiler);
  Cu.forceGC();
  var file = require('sdk/io/file');
  // And do some writes:
  var textWriter = file.open('a/local/path.txt', 'w');
  textWriter.write('sample data');
  textWriter.close();
  console.log('called customActions.');
};
2) Expose my customActions function to a page:
var mod = require("sdk/page-mod");
var data = require("sdk/self").data;
mod.PageMod({
  include: ['*'],
  contentScriptFile: data.url("myscript.js"),
  onAttach: function(worker) {
    worker.port.on('callCustomActions', function() {
      customActions();
    });
  }
});
and in myscript.js:
exportFunction(function() {
  self.port.emit('callCustomActions');
}, unsafeWindow, {defineAs: "callCustomActions"});
3) Load the XPI during my webdriver test, and call the global function callCustomActions.
So two questions about this process.
1) This entire process is very roundabout. Is there a better practice for talking to a Firefox extension via webdriver?
2) My current solution isn't working well. If I run my extension via cfx run directly (without webdriver) it works as expected. However, neither the SDK nor the chrome authority does anything when running via webdriver.
By the way, I know my function is being called because the log lines "calling customActions." and "called customActions." both print.
Maybe there are some Firefox preferences that I need to set but haven't?
It may be that you do not need the add-on at all. Mozilla uses Marionette for test automation of Firefox OS, amongst other things:
Marionette is an automation driver for Mozilla's Gecko engine. It can remotely control either the UI or the internal JavaScript of a Gecko platform, such as Firefox or Firefox OS. It can control both the chrome (i.e. menus and functions) and the content (the webpage loaded inside the browsing context), giving a high level of control and the ability to replicate user actions. In addition to performing actions on the browser, Marionette can also read the properties and attributes of the DOM.
If this sounds similar to Selenium/WebDriver then you're correct! Marionette shares much of the same ethos and API as Selenium/WebDriver, with additional commands to interact with Gecko's chrome interface. Its goal is to replicate what Selenium does for web content: to enable the tester to have the ability to send commands to remotely control a user agent.

Unable to get jasmine-jquery fixtures to load in Visual Studio with Chutzpah, or even in browser

I'm prototyping an MVC.NET 4.0 application and am defining our JavaScript test configuration. I managed to get Jasmine working in VS2012 with the Chutzpah extensions, and I am able to run pure JavaScript tests successfully.
However, I am unable to load test fixture (DOM) code and access it from my tests.
Here is the code I'm attempting to run:
test.js
/// various reference paths...
jasmine.getFixtures().fixturesPath = "./";
describe("jasmine tests:", function () {
  it("Copies data correctly", function () {
    loadFixtures('testfixture.html');
    //setFixtures('<div id="wrapper"><div></div></div>');
    var widget = $("#wrapper");
    expect(widget).toExist();
  });
});
The fixture is in the same folder as the test file. The setFixtures operation works, but when I attempt to load the HTML from a file, it doesn't. Initially, I tried to use the most recent version of jasmine-jquery from the repository, but then fell back to the more-than-a-year-old download, version 1.3.1, because it looked like there was a bug in the newer one. Here is the message I get with 1.3.1:
Test 'jasmine tests::Copies data correctly' failed
Error: Fixture could not be loaded: ./testfixture.html (status: error, message: undefined) in file:///C:/Users/db66162/SvnProjects/MvcPrototype/MvcPrototype.Tests/Scripts/jasmine/jasmine-jquery-1.3.1.js (line 103)
When I examine the source, it is doing an AJAX call, yet I'm not running in a browser. Instead, I'm using Chutzpah, which runs a headless browser (PhantomJS). When I run this in the browser with a test harness, it does work.
Is there someone out there who has a solution to this problem? I need to be able to run these tests automatically both in Visual Studio and TeamCity (which is why I am using Chutzpah). I am open to solutions that include using another test runner in place of Chutzpah. I am also going to evaluate the qUnit testing framework in this effort, so if you know that qUnit doesn't have this problem in my configuration, I will find that useful.
I fixed the issue by adding the following setting to chutzpah.json:
"TestHarnessLocationMode": "SettingsFileAdjacent",
where chutzpah.json is in my test app root
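For reference, a minimal chutzpah.json containing just that setting looks like this (placed next to the tests, in the test project root):
{
  "TestHarnessLocationMode": "SettingsFileAdjacent"
}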
I eventually got my problem resolved. Thank you Ian for replying. I am able to use PhantomJS in TeamCity to run the tests through the test runner. I contacted the author of Chutzpah and he deployed an update to his product that solved my problem in Visual Studio. I can now run the Jasmine tests in VS using Chutzpah conventions to reference libraries and include fixtures, and use the PhantomJS runner in TeamCity to run the test harness (HTML).
My solution on TeamCity was to run a batch file that launches tests. So, the batch:
@echo off
REM -- Uses the PhantomJS headless browser packaged with Chutzpah to run
REM -- Jasmine tests. Does not use Chutzpah.
setlocal
set path=..\packages\Chutzpah.2.2.1\tools;%path%;
echo ##teamcity[message text='Starting Jasmine Tests']
phantomjs.exe phantom.run.js %1
echo ##teamcity[message text='Finished Jasmine Tests']
And the Javascript (phantom.run.js):
// This code lifted from https://gist.github.com/3497509.
// It takes the test harness HTML file URL as the parameter. It launches PhantomJS,
// and waits a specific amount of time before exit. Tests must complete before that
// timer ends.
(function () {
  "use strict";
  var system = require("system");
  var url = system.args[1];

  phantom.viewportSize = {width: 800, height: 600};
  console.log("Opening " + url);

  var page = new WebPage();

  // This is required because PhantomJS sandboxes the website and it does not
  // show the console messages from that page by default
  page.onConsoleMessage = function (msg) {
    console.log(msg);
    // Exit as soon as the last test finishes.
    if (msg && msg.indexOf("Dixi.") !== -1) {
      phantom.exit();
    }
  };

  page.open(url, function (status) {
    if (status !== 'success') {
      console.log('Unable to load the address!');
      phantom.exit(-1);
    } else {
      // Timeout - kill PhantomJS if the tests are still not done when the timer below fires.
      window.setTimeout(function () {
        phantom.exit();
      }, 10 * 1000); // NB: use accurately, tune up referring to your needs
    }
  });
}());
I've got exactly the same problem. AFAIK it's to do with jasmine-jquery trying to load the fixtures via Ajax when the tests are run via the file:// URI scheme.
Apparently Chrome doesn't allow this (see https://stackoverflow.com/a/5469527/1904 and http://code.google.com/p/chromium/issues/detail?id=40787) and support amongst other browsers may vary.
Edit
You might have some joy by trying to set some PhantomJS command-line options such as --web-security=false. YMMV though: I haven't tried this myself yet, but thought I'd mention it in case it's helpful (or in case anyone else knows more about this option and whether it will help).
Update
I did manage to get some joy loading HTML fixtures by adding a /// <reference path="relative/path/to/fixtures" /> comment at the top of my Jasmine spec. But I still have trouble loading JSON fixtures.
Further Update
Loading HTML fixtures by adding a /// <reference path="relative/path/to/fixtures" /> comment merely loads your HTML fixtures into the Jasmine test runner, which may or may not be suitable for your needs. It doesn't load the fixtures into the jasmine-fixtures element, and consequently your fixtures don't get cleaned up after each test.
