I am writing an ESLint plugin and I have been testing with eslint.RuleTester. This works, and I know how to configure its test options argument, but I always have to run the entire test file.
here is an example test file:
const {RuleTester} = require('eslint');
const ruleTester = new RuleTester({setup});
const rule = require('myrule');
// this works, but I have to run the entire file (and thus all the tests)
ruleTester.run('myruletest', rule, {invalid, valid});
Normally, when I install a test runner I get a run configuration for it and handy play⏯ and debug🐞 buttons in line with each test. As I write more tests (particularly in the same file) it would be handy to quickly click the arrow beside a test and run just that single test.
If I try to call ruleTester.run from a Mocha it callback, it will not report the test correctly (and I definitely cannot debug / step into it).
e.g. this does not work well
const mocha = require('mocha');
const {RuleTester} = require('eslint');
const ruleTester = new RuleTester({setup});
const rule = require('myrule');
// nice play button and custom run configuration, but not honest test feedback
it('mytest', () => {
    // it'll run this, but will not report correctly -- `it` says it always passes
    ruleTester.run('myruletest', rule, {invalid, valid});
});
it('mytest', async () => {
    // async is of no help
    await ruleTester.run('myruletest', rule, {invalid, valid});
});
// this still works, but then I have to run the entire file (and thus all the tests)
ruleTester.run('myruletest', rule, {invalid, valid});
So how do I tell WebStorm to either
1. recognize eslint.RuleTester as a test runner, or
2. properly call an instance of RuleTester from my own test runner?
Recognizing eslint.RuleTester as a test runner would require developing a special plugin.
See http://www.jetbrains.org/intellij/sdk/docs/ for basic documentation on plugin development. You can use existing plugins as examples: the Mocha runner is not fully open source, but its JavaScript part is (https://github.com/JetBrains/mocha-intellij); there is also an open-source plugin for Karma: https://github.com/JetBrains/intellij-plugins/tree/master/js-karma
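As for the second option, depending on your ESLint version you may not need a custom runner at all: RuleTester uses a global describe/it when one is defined (and newer versions also let you override the static RuleTester.describe / RuleTester.it hooks), so if you call ruleTester.run at the top level of a file executed by Mocha, each valid/invalid case should be reported as its own Mocha test. A minimal sketch, assuming a reasonably recent ESLint and the rule module from the question (the sample snippets are made up):
const {RuleTester} = require('eslint');
const rule = require('myrule');
const ruleTester = new RuleTester({/* parserOptions, env, etc. */});
// Optional, for older ESLint versions: wire the hooks to Mocha explicitly.
// RuleTester.describe = describe;
// RuleTester.it = it;
// Called at the top level (not inside an it), so RuleTester generates the
// describe/it blocks itself and Mocha reports every case individually.
ruleTester.run('myruletest', rule, {
    valid: ['var ok = 1;'],
    invalid: [{code: 'var notOk = 1;', errors: 1}]
});
Run this with a normal Mocha run configuration; the individual cases then show up in the test tree, though whether WebStorm offers a per-case gutter icon still depends on its Mocha integration.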
I'm trying to use cy.writeFile to write my fixture files against an api. I need these fixture files to be generated before any Cypress tests run, since all the tests will use these fixture files. I only need this to be run one time, before any tests run, not before each test.
I've tried adding a before function to the /cypress/support/index.js file but it doesn't create the fixture files when I run "cypress run".
import './commands'
before(function() {
// runs once before all tests in the block
const apiUrl = 'http://localhost:8080/api/';
const fixturesPath = 'cypress/fixtures/';
const fixtureExtension = '.json';
let routePath = 'locations';
cy.request(`${apiUrl}${routePath}`).then((response) => {
cy.writeFile(`${fixturesPath}${routePath}${fixtureExtension}`, response.body);
});
});
Shouldn't this before hook run before any of my tests when I run "cypress run"?
Yes, it should run before any of your tests.
The fixture is not created because the request fails. That can happen for any number of reasons, for instance the API is not ready when Cypress runs, or it requires authentication; you'd better double-check that.
I made an example here. Before starting Cypress (yarn cy:run), you have to make sure both the API server (yarn start:mock) and the web server (yarn start) are ready.
Another note: the before() function in support/index.js is not run only once, because it's included in every test suite; so if you have 3 test files, it's executed 3 times.
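If the fixtures really must be generated exactly once, one option is to take the generation out of Cypress entirely and do it in a plain Node script that runs before cypress run (for example wired up as "test": "node scripts/generate-fixtures.js && cypress run" in package.json). A minimal sketch, reusing the URL and paths from the question; the script name and npm wiring are assumptions:
// scripts/generate-fixtures.js (hypothetical location)
const fs = require('fs');
const http = require('http');

const apiUrl = 'http://localhost:8080/api/';
const fixturesPath = 'cypress/fixtures/';
const fixtureExtension = '.json';
const routes = ['locations'];

routes.forEach((routePath) => {
    http.get(`${apiUrl}${routePath}`, (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => {
            fs.writeFileSync(`${fixturesPath}${routePath}${fixtureExtension}`, body);
            console.log(`Wrote fixture for ${routePath}`);
        });
    }).on('error', (err) => {
        console.error(`Could not fetch ${routePath}: ${err.message}`);
        process.exit(1);
    });
});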
May I know what the difference is between Zombie.js and Jasmine? Are they both testing frameworks?
Jasmine is a unit test framework for BDD (behavior-driven development). It requires an execution environment like Node.js or a browser (Firefox, Chrome, IE, PhantomJS etc.) to run (and to provide the environment for the code under test). Jasmine provides the test execution and assertion infrastructure (that's the describe(), it(), expect()).
Zombie.js is an emulated, headless browser. It's a browser of its own, plus an interaction API for driving it. It's like Selenium/WebDriver. It uses jsdom under the hood to provide the APIs browsers usually provide. Zombie.js requires a test execution and assertion infrastructure (like Mocha + should.js, or even Jasmine).
With Jasmine you write tests on a module or group-of-modules level, but usually not on an application level.
With Zombie.js you interact with a website (served by a server) through an interaction API.
With Jasmine you make fine grained assertions on the output or events created for certain input - on the module level.
With Zombie.js you interact with the whole application (or website).
With Jasmine you test only the JavaScript part.
With Zombie.js you test the frontend + backend. Though you might be able to mock away and intercept server interaction (maybe, I'm not familiar with it).
With Jasmine you call a method/function, pass a parameter and test the return value and events
With Zombie.js you load a page and fill a form and test the output
With Jasmine you need to run the tests in the proper execution environment (like Firefox, Chrome, ...)
With Zombie.js your pages run in a new execution environment
With Jasmine you can test in the browsers (that consumers use) with their typical quirks
With Zombie.js you test your application in a new browser with new quirks
Jasmine example:
// spy on other module to know "method" was called on it
spyOn(otherModule, "method");
// create module
let module = new Module(otherModule),
returnValue;
// calls otherModule.method() with the passed value too; always returns 42
returnValue = module(31415);
// assert result and interaction with other modules
expect(returnValue).toBe(42);
expect(otherModule.method).toHaveBeenCalledWith(31415);
Zombie.js example:
// create browser
const browser = new Browser();
// load page by url
browser.visit('/signup', function() {
    browser
        // enter form data by name / CSS selectors
        .fill('email', 'zombie@underworld.dead')
        .fill('password', 'eat-the-living')
        // interact: press a button; assert on the output once it completes
        .pressButton('Sign Me Up!', function() {
            // actual test of the output data
            browser.assert.text('title', 'Welcome To Brains Depot');
            // `done` is the completion callback of the surrounding test framework
            done();
        });
});
Zombie.js, like WebDriver/Selenium, is no replacement for a unit testing framework like Jasmine or Mocha.
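To make that concrete, here is roughly how the signup example above sits inside Mocha as the test infrastructure; just a sketch, the URL is made up:
const Browser = require('zombie');

describe('signup page', function () {
    it('welcomes the new user', function (done) {
        const browser = new Browser();
        // a full URL here; configuring a base site varies between Zombie versions
        browser.visit('http://localhost:3000/signup', function () {
            browser.pressButton('Sign Me Up!', function () {
                browser.assert.text('title', 'Welcome To Brains Depot');
                done();
            });
        });
    });
});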
I want to run Jasmine unit tests in combination with JSTest.NET, so that I can execute my test runs in Visual Studio with MSTest. This is obligatory for me, as our team's build system (TFS) workflow cannot be extended/changed (for organisational reasons) to use Jasmine's SpecRunner.html or some other way of running Jasmine tests.
Thus, JSTest.NET seems to do the trick for me, as it is a bridge between JavaScript and MSTest.
Therefore, my first step was to write this MSTest test:
[DeploymentItem(#"Scripts\jasmine\jasmine.js")]
[DeploymentItem(#"Scripts\jasmine\jasmine-html.js")]
[DeploymentItem(#"Scripts\jasmine\boot.js")]
[TestMethod]
public void SimpleJasmineTest()
{
_script.AppendFile("jasmine.js");
_script.AppendFile("jasmine-html.js");
_script.AppendFile("boot.js");
_script.AppendBlock(#"
describe('Hello world', function() {
it('should be nice here', function() {
expect('world'.length).toBe(5);
});
})");
_script.RunTest(#"
");
}
When executing this test, I get a "runtime error in JScript: 'window' is undefined", which is understandable, as there is no browser involved that could provide a window object.
Can anyone point me in the right direction?
I found that using Chutzpah's solution (instead of JSTest.NET) is practical for my needs, following this tutorial:
http://aspnetperformance.com/post/Unit-testing-JavaScript-as-part-of-TFS-Build.aspx
I am trying to run my API test via a VBScript file based on the Automation Object Model. I am able to launch, open and run my GUI tests, but for API tests I get an error "cannot open test", code 800A03EE.
I have read somewhere that my test case is probably corrupted, so I saved the test as a new one, but it still doesn't work.
Following is my VBScript:
testPath = "absolute address to my API-test folder"
Set objUFTapp = CreateObject("QuickTest.Application")
objUFTapp.Launch
objUFTapp.Visible = TRUE
objUFTapp.Open testPath, TRUE '------> throws the error
Set pDefColl = objUFTapp.Test.ParameterDefinitions
Set rtParams = pDefColl.GetParameters()
Set rtParam = rtParams.Item("param1")
rtParam.Value = "value1"
objUFTapp.Test.Run uftResultsOpt,True, rtParams
objUFTapp.Test.Close
objUFTapp.Quit
For some unknown reason, I was also facing a similar issue.
As a workaround, I created one GUI test from which I was calling API test like this:
RunAPITest "API_Test_Name"
To do so:
1. Create a new GUI test
2. Go to Design -> Call to existing API test
3. Provide the path to your API test in Test path
4. Select <Entire Test> for Call to
5. Pass any Input or Output parameters from this screen
6. Click OK
Now, you can use your own VBScript to call this GUI test which will run your desired API test.
I know it's not a good idea to do so, but it will get the job done.
At the time of UFT installation, we can opt for an additional automation tool, LeanFT.
As the main feature of LeanFT, we can have the test environment right next to our development environment, either in Java (Eclipse) or C#/.NET (Visual Studio). We are also provided with an object identification tool (GUI Spy), which makes it possible to develop GUI tests not in VBScript anymore but in one of the more powerful modern languages (Java or C#). With this very short summary, let's have a look at how we can actually execute API tests outside of the UFT IDE.
After a successful installation of the LeanFT tool, we can create a LeanFT project in Eclipse or Visual Studio.
C# code:
using HP.LFT.SDK;
using HP.LFT.SDK.APITesting.UFT;
......
[TestMethod]
public void TestMethod1()
{
Dictionary<string, object> InputParameters = new Dictionary<string, object>();
InputParameters.Add("environment", "TEST");
APITestResult ExecutionResult = APITestRunner.Run("UFT Test Path" , InputParameters);
MessageBox.Show(ExecutionResult.Status.ToString());
.....
}
For sure, the above code is just to give you an insight, although it works pretty well. For better diagnostics, we can take advantage of other libraries like "HP.LFT.Verifications" for checking the result.
Important: You cannot use UFT and LeanFT at the same time as your runtime engine!
I'm prototyping an MVC.NET 4.0 application and am defining our JavaScript test configuration. I managed to get Jasmine working in VS2012 with the Chutzpah extensions, and I am able to run pure JavaScript tests successfully.
However, I am unable to load test fixture (DOM) code and access it from my tests.
Here is the code I'm attempting to run:
test.js
/// various reference paths...
jasmine.getFixtures().fixturesPath = "./";
describe("jasmine tests:", function () {
it("Copies data correctly", function () {
loadFixtures('testfixture.html');
//setFixtures('<div id="wrapper"><div></div></div>');
var widget = $("#wrapper");
expect(widget).toExist();
});
});
The fixture is in the same folder as the test file. The setFixtures operation works, but when I attempt to load the HTML from a file, it doesn't. Initially, I tried to use the most recent version of jasmine-jquery from the repository, but then fell back to the over 1 year old download version 1.3.1 because it looked like there was a bug in the newer one. Here is the message I get with 1.3.1:
Test 'jasmine tests::Copies data correctly' failed
Error: Fixture could not be loaded: ./testfixture.html (status: error, message: undefined) in file:///C:/Users/db66162/SvnProjects/MvcPrototype/MvcPrototype.Tests/Scripts/jasmine/jasmine-jquery-1.3.1.js (line 103)
When I examine the source, it is doing an AJAX call, yet I'm not running in a browser. Instead, I'm using Chutzpah, which runs a headless browser (PhantomJS). When I run this in the browser with a test harness, it does work.
Is there someone out there who has a solution to this problem? I need to be able to run these tests automatically both in Visual Studio and TeamCity (which is why I am using Chutzpah). I am open to solutions that include using another test runner in place of Chutzpah. I am also going to evaluate the qUnit testing framework in this effort, so if you know that qUnit doesn't have this problem in my configuration, I will find that useful.
I fixed the issue by adding the following setting to chutzpah.json:
"TestHarnessLocationMode": "SettingsFileAdjacent",
where chutzpah.json is in my test app's root.
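For reference, the whole settings file can be as small as this; with SettingsFileAdjacent the generated test harness is placed (as I understand it) in the same folder as the chutzpah.json file:
{
    "TestHarnessLocationMode": "SettingsFileAdjacent"
}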
I eventually got my problem resolved. Thank you Ian for replying. I am able to use PhantomJS in TeamCity to run the tests through the test runner. I contacted the author of Chutzpah and he deployed an update to his product that solved my problem in Visual Studio. I can now run the Jasmine test using Chutzpah conventions to reference libraries and include fixtures while in VS, and use the PhantomJS runner in TeamCity to use the test runner (html).
My solution on TeamCity was to run a batch file that launches tests. So, the batch:
@echo off
REM -- Uses the PhantomJS headless browser packaged with Chutzpah to run
REM -- Jasmine tests. Does not use Chutzpah.
setlocal
set path=..\packages\Chutzpah.2.2.1\tools;%path%;
echo ##teamcity[message text='Starting Jasmine Tests']
phantomjs.exe phantom.run.js %1
echo ##teamcity[message text='Finished Jasmine Tests']
And the Javascript (phantom.run.js):
// This code lifted from https://gist.github.com/3497509.
// It takes the test harness HTML file URL as the parameter. It launches PhantomJS,
// and waits a specific amount of time before exit. Tests must complete before that
// timer ends.
(function () {
    "use strict";
    var system = require("system");
    var url = system.args[1];
    phantom.viewportSize = {width: 800, height: 600};
    console.log("Opening " + url);
    var page = new WebPage();
    // This is required because PhantomJS sandboxes the website and it does not
    // show the console messages from that page by default
    page.onConsoleMessage = function (msg) {
        console.log(msg);
        // Exit as soon as the last test finishes.
        if (msg && msg.indexOf("Dixi.") !== -1) {
            phantom.exit();
        }
    };
    page.open(url, function (status) {
        if (status !== 'success') {
            console.log('Unable to load the address!');
            phantom.exit(-1);
        } else {
            // Timeout - kill PhantomJS if the tests are still not done after
            // the delay below (10 seconds here; tune it to your needs).
            window.setTimeout(function () {
                phantom.exit();
            }, 10 * 1000);
        }
    });
}());
I've got exactly the same problem. AFAIK it's to do with jasmine-jquery trying to load the fixtures via Ajax when the tests are run via the file:// URI scheme.
Apparently Chrome doesn't allow this (see https://stackoverflow.com/a/5469527/1904 and http://code.google.com/p/chromium/issues/detail?id=40787) and support amongst other browsers may vary.
Edit
You might have some joy by trying to set some PhantomJS command-line options, such as --web-security=false. YMMV though: I haven't tried this myself yet, but thought I'd mention it in case it's helpful (or in case anyone else knows more about this option and whether it will help).
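If you go that route with the batch runner from the earlier answer, the flag goes on the PhantomJS command line before the script name, something like:
phantomjs.exe --web-security=false phantom.run.js %1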
Update
I did manage to get some joy loading HTML fixtures by adding a /// <reference path="relative/path/to/fixtures" /> comment at the top of my Jasmine spec. But I still have trouble loading JSON fixtures.
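For example, with the fixture from the question sitting next to the spec file, the top of test.js would look something like this (the exact relative path is an assumption based on the layout described in the question):
/// <reference path="testfixture.html" />
describe("jasmine tests:", function () {
    // ... specs as above
});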
Further Update
Loading HTML fixtures by adding a /// <reference path="relative/path/to/fixtures" /> comment merely loads your HTML fixtures into the Jasmine test runner, which may or may not be suitable for your needs. It doesn't load the fixtures into the jasmine-fixtures element, and consequently your fixtures don't get cleaned up after each test.