Do not report skipped tests with karma-jasmine-html-reporter

Is there a way to prevent karma-jasmine-html-reporter aka kjhtml from reporting skipped/pending tests?
I run some tests using fit and fdescribe, and I want to see results only for the selected tests; however, the reporter always displays all tests from the suite.

Apparently, yes, there's a way to do that with Jasmine (starting from v3.3.0). I've been able to do it in an Angular project. In test.ts I've put something like:
jasmine.getEnv().configure({ hideDisabled: true });
Official documentation here: https://jasmine.github.io/api/3.5/Configuration.html.
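For context, a minimal sketch of where that call can sit in an Angular CLI project. The file layout and the zone.js import path are assumptions that vary by Angular version; only the configure() call itself comes from the answer above.

// src/test.ts -- loaded by Karma before the spec files (layout assumed)
import 'zone.js/dist/zone-testing'; // path differs in newer Angular versions
import { getTestBed } from '@angular/core/testing';
import {
  BrowserDynamicTestingModule,
  platformBrowserDynamicTesting,
} from '@angular/platform-browser-dynamic/testing';

// Hide fit/fdescribe-excluded specs from kjhtml output (Jasmine >= 3.3).
jasmine.getEnv().configure({ hideDisabled: true });

getTestBed().initTestEnvironment(
  BrowserDynamicTestingModule,
  platformBrowserDynamicTesting()
);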

Related

Cypress with Jasmine

We are in the process of migrating existing Protractor scripts (written with the Jasmine framework) to Cypress.
I would like to know if we can use Jasmine in Cypress as well. Since Cypress uses Mocha by default, I need clarification on whether we can install the Jasmine dependencies alongside Cypress and define the tests with that framework.
I don't think so. Cypress modifies/patches Mocha hooks like beforeEach() and also Chai's expect() to work with its framework.
Is there anything about Jasmine that you don't get out of the box with Cypress? I believe the expect() syntax may be different; if you have too many Jasmine-style expectations to change, you may be able to add custom Chai assertions so that they work without modification.
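To illustrate that last suggestion, a hedged sketch of registering a Jasmine-style matcher on Cypress's bundled Chai. The support-file path and the toBeTruthy matcher are illustrative assumptions, not part of the original answer.

// cypress/support/e2e.js (support file path assumed; varies by Cypress version)
// Register a Jasmine-style matcher on the Chai instance Cypress bundles.
chai.use((_chai) => {
  _chai.Assertion.addMethod('toBeTruthy', function () {
    this.assert(
      Boolean(this._obj),                 // the asserted value
      'expected #{this} to be truthy',    // failure message
      'expected #{this} not to be truthy' // negated failure message
    );
  });
});

// Afterwards, a Jasmine-style expectation runs unchanged in a spec:
// expect(someValue).toBeTruthy();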

Why do we need `afterEach(cleanup);`?

This is a question about unit testing (Jest + @testing-library/react).
Hi. I started using @nrwl/react recently.
These are amazing products, and I'm excited to build a monorepo project with Nx.
By the way, there is an afterEach(cleanup); in the generated template test file.
This is my sample project.
https://github.com/tyankatsu0105/reproducibility-react-test-nx/blob/master/apps/client/src/app/app.spec.tsx#L7
However, react-testing-library doesn't need cleanup when using Jest:
https://testing-library.com/docs/react-testing-library/api#cleanup
Please note that this is done automatically if the testing framework you're using supports the afterEach global (like mocha, Jest, and Jasmine). If not, you will need to do manual cleanups after each test.
In fact, I see an error when I remove afterEach(cleanup); from the test files:
Found multiple elements with the text:
thanks!
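For reference, a minimal sketch of the pattern in question. The component and matched text are hypothetical; whether the explicit call is needed depends on your @testing-library/react version, since auto-cleanup only happens where the afterEach global is supported and registered.

// app.spec.tsx (sketch; component name hypothetical)
import React from 'react';
import { render, cleanup, screen } from '@testing-library/react';
import App from './app';

// Unmount everything rendered during each test. Without this, a component
// rendered in one test can still be in the DOM during the next one, which
// is exactly what produces "Found multiple elements with the text: ...".
afterEach(cleanup);

describe('App', () => {
  it('renders a heading', () => {
    render(<App />);
    expect(screen.getByText(/welcome/i)).toBeTruthy(); // text assumed
  });
});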

Extracting test outcomes in Serenity BDD

I went through the Serenity documentation for extracting the test outcomes.
Below is the code; it didn't work:
OutcomeFormat format = OutcomeFormat.XML;
TestOutcomes outcomes = TestOutcomeLoader.loadTestOutcomes().inFormat(format);
I tried the code below instead, and it's working:
OutcomeFormat format = OutcomeFormat.JSON;
TestOutcomeLoaderBuilder outcomes = TestOutcomeLoader.loadTestOutcomes().inFormat(format);
TestOutcomes out = outcomes.from(new File(""));
The issue is that I need the test outcomes in @AfterScenario, but Serenity reports are generated only after the entire execution. I tried changing the POM, but it didn't help. Is there any other way to extract the test results?
Serenity uses JSON format by default now. Why are you trying to obtain the test outcomes? (i.e. what problem are you trying to solve?)
I created a separate Java class for report extraction and added it to the Maven plugin so that it gets executed after the Serenity report is generated.
As @John Smart has mentioned, JSON and HTML are the default output formats.
Still, if you want to access the outcomes after test execution,
you can create a custom listener and listen to the Serenity event bus.
A TestRunFinished event will be published with the outcome as a parameter.
You can use the outcome to get the required details.
For creating a custom listener you can follow this page.

Custom code for @javascript tags in Capybara Webkit

I have some test setup code that I need to run before any Capybara tests that run JavaScript with the @javascript tag. I don't want the code to run the rest of the time, since this setup is expensive in terms of system resources and cognitive load.
I've searched the documentation extensively and was unable to find any examples of running arbitrary Ruby before tests based on tags. Can anyone help me out?
Edit: after thinking about this some more, I only need the code to run once before any tests are run, so this is probably a simpler problem than I first described.
Since you're asking about a @javascript tag, I'm assuming you're talking about Cucumber-driven tests; if you're not, please clarify.
To run code before a test, you use a Before hook:
Before('@javascript') do
  # any code here will get run before each test tagged with @javascript
end
To make it run that code only once, you'd need to use a global variable:
Before('@javascript') do
  # `return` raises LocalJumpError inside a block; use `next` to bail out
  next if $already_run
  # code here will get run once, before the first test tagged @javascript
  $already_run = true
end

Stop Jasmine test after first expect fails

I'm familiar with Python unittest tests, where if an assertion fails, that test is marked as "failed" and the runner moves on to other tests. Jasmine, on the other hand, will continue through all expects even if one of them fails. How can I make Jasmine stop processing a test after the first expectation fails?
it ("shouldn't need to test other expects if the first fails", function() {
expect(array.length).toBe(1);
// don't need to check this if the first failed.
expect(array[0]).toBe("foo");
});
Am I thinking about it wrong? I have some tests with lots of expects, and it seems like a waste to show all the stack traces when really only the first one is wrong.
@Gregg's answer was correct for the latest version of Jasmine at that time (v2.0.0).
However, since then, this new feature was added in v2.3.0:
Allow user to stop a specs execution when an expectation fails (Fixes #577)
It's activated by adding throwFailures=true to the query string of the runner page, e.g.:
http://localhost:8000/?throwFailures=true
Jasmine doesn't support failing early in a single spec. The idea is to give you all of the failures, in case that helps you figure out what is really wrong in your spec.
Jasmine has a stop-on-failure feature, and you can check it here:
https://plnkr.co/plunk/Ko5m6MQz9VUPMMrC
This starts Jasmine with the oneFailurePerSpec property.
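As a hedged sketch of what that amounts to, assuming Jasmine 3.x (where oneFailurePerSpec was the documented name for this option before it was renamed in v3.8), you can set it in a helper that runs before the specs load:

// helper loaded before the spec files (file name up to you)
jasmine.getEnv().configure({
  oneFailurePerSpec: true, // each spec stops at its first failed expectation
});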
According to the comments on https://github.com/jasmine/jasmine/issues/414, I figured out that two solutions exist for this:
https://github.com/radialanalytics/protractor-jasmine2-fail-whale
https://github.com/Updater/jasmine-fail-fast
I just started using protractor-jasmine2-fail-whale because it seems to have more features, although to take screenshots on test failures I currently use protractor-jasmine2-html-reporter.
I'm using Jasmine in Appium (a tool for testing React Native apps).
I fixed the issue by adding stopSpecOnExpectationFailure=true to the Jasmine config
(jasmine v3.8.0 & jasmine-core v3.8.0).
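For concreteness, a sketch of the two usual places that setting can live in Jasmine 3.8+; the spec_dir/spec_files values are assumptions about a typical Node project:

// Option 1: spec/support/jasmine.json (Node runner config)
// {
//   "spec_dir": "spec",
//   "spec_files": ["**/*[sS]pec.js"],
//   "stopSpecOnExpectationFailure": true
// }

// Option 2: configure the env in a helper loaded before the specs
jasmine.getEnv().configure({
  stopSpecOnExpectationFailure: true, // successor to oneFailurePerSpec (deprecated in v3.8)
});

With either in place, the second expect in the example above never runs once the first one fails.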
