Test cases fail if any of them fails - mocha.js

I am writing Mocha test cases to check a REST API, but the issue is that if I have 3 test cases and the first one fails, the rest are not executed. It stops at the first one.
Here is following code:
describe('suite 1', function () {
  it('tc1', function (done) {
    // some test case with failure
    should([1]).equal([]);
    done();
  });
  it('tc2', function (done) {
    // some test case with success
    should([]).equal([]);
    done();
  });
});
With the above code I am not able to get a report like:
2 test cases.
1 passed.
1 failed.
It stops in the middle; here it fails on the first test case only.

Both tests are failing because they both should fail.
Both tests fail because you are using equal, which checks for object identity. That is, it uses === to check for equality. Now, open an interactive Node session and try this:
[] === []
You'll get false. That's because each new empty array is a new JavaScript object and === will be true only if the two arrays are the same object.
Note that you get the result you expect in your first test but not for the reason you (probably) think. The test fails for the same reason I've just explained. The fact that one array contains an element but the other is empty is not taken into account by should.
You should use eql to test whether the arrays have the same members.
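For illustration, here is a minimal sketch of the suite rewritten with eql (assuming should.js is required as a module; the done callbacks are dropped because the assertions are synchronous):

const should = require('should');

describe('suite 1', function () {
  it('tc1', function () {
    // Still fails, but now for the right reason: [1] and [] have different members.
    should([1]).eql([]);
  });
  it('tc2', function () {
    // Passes: deep equality compares members, and both arrays are empty.
    should([]).eql([]);
  });
});

Running this with Mocha should report 2 tests, 1 passing and 1 failing; Mocha keeps executing the remaining tests after a failure unless you pass --bail.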

Related

Opentelemetry context propagation test

I'm trying to test that some OpenTelemetry spans are correctly built and linked in parent-child relations.
clientSpan = Span.wrap(
  SpanContext.createFromRemoteParent(
    "12345678123456781234567812345678",
    "1234567812345678",
    TraceFlags.getDefault,
    TraceState.getDefault
  )
)
.updateName("clientSpan")
And the code which I want to test creates a new span that is the child of the client span it receives:
tracer.spanBuilder("my-endpoint")
  .setSpanKind(SpanKind.SERVER)
  .setParent(Context.current().`with`(clientSpan))
  .startSpan()
My problem is that whenever I pass the clientSpan, InMemorySpanExporter.getFinishedSpanItems() returns empty.
It returns non-empty, as expected, when no clientSpan is used.
In the unit test I tried both ending the parent span (clientSpan.end()) and leaving it open, but getFinishedSpanItems() is always empty.
Maybe I'm not building the clientSpan correctly in my unit test? (In production code it gets extracted from the carrier-propagator-getter combo.)
Span.wrap returns a non-recording span that carries the provided span context but has no functionality; all operations on it, including updateName, are no-ops. So whatever you do, clientSpan itself records nothing.
Also, change TraceFlags.getDefault to TraceFlags.getSampled. The default returns flags with all bits off, i.e. the remote parent is marked as not sampled, so with the default parent-based sampler no child spans will be recorded. See the sketch below.
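For reference, here is the question's own snippet with only the trace flags changed (everything else stays as in the original):

clientSpan = Span.wrap(
  SpanContext.createFromRemoteParent(
    "12345678123456781234567812345678",
    "1234567812345678",
    TraceFlags.getSampled, // sampled bit on, so children of this remote parent get recorded
    TraceState.getDefault
  )
)
.updateName("clientSpan") // note: still a no-op on a wrapped (non-recording) span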

Angular Tests break at random: "Uncaught TypeError: You provided 'undefined' where a stream was expected."

We have a medium-sized Angular app with currently about 700 unit tests.
A few weeks ago, perfectly fine tests started to break. Even stranger: running the tests twice can yield different results, i.e. different tests may break.
In the console, we always find the error :
Uncaught TypeError: You provided 'undefined' where a stream was expected.
But the stack trace gives no hint to where the root of the error is actually located (see the end of this post). The stack trace points to the mergeMap operator, but it turns out that we use this operator nowhere in our app and nowhere in our tests.
I stepped through all spec files and let them run on their own (with fdescribe). Every single spec file passes without errors. Running them all together leads to the described breakage.
Of course my guess was that we were facing an async problem, so I took the effort to go through all the tests and wrap each one of them in an async environment. I also checked that every subscription gets unsubscribed at some point; this was the case for our app, but not always for our tests.
However, the error still persists.
It's a big issue for our project. Any advice is very welcome.
Maybe somebody knows a way to locate the part of our tests that is causing the problem?
We now use jasmine 3.3.0, karma v3.1.4 and Angular 7.1.3.
We did the update of Jasmine and Angular a week ago because we hoped to get rid of the problem. Only one thing changed: before the update, tests didn't break at random but at a fixed number of tests (in our case, 639 tests would cause a test to break; 638, 640, 641, etc. would pass; 648 would break again). I assume it has something to do with the random seed that Jasmine is now using.
Here is the full stack trace:
Uncaught TypeError: You provided 'undefined' where a stream was expected. You can provide an Observable, Promise, Array, or Iterable.
at subscribeTo (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/util/subscribeTo.js:41)
at subscribeToResult (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/util/subscribeToResult.js:11)
at MergeMapSubscriber.push../node_modules/rxjs/_esm5/internal/operators/mergeMap.js.MergeMapSubscriber._innerSub (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/operators/mergeMap.js:74)
at MergeMapSubscriber.push../node_modules/rxjs/_esm5/internal/operators/mergeMap.js.MergeMapSubscriber._tryNext (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/operators/mergeMap.js:68)
at MergeMapSubscriber.push../node_modules/rxjs/_esm5/internal/operators/mergeMap.js.MergeMapSubscriber._next (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/operators/mergeMap.js:51)
at MergeMapSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber.next (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/Subscriber.js:54)
at Observable._subscribe (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/util/subscribeToArray.js:5)
at Observable.push../node_modules/rxjs/_esm5/internal/Observable.js.Observable._trySubscribe (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/Observable.js:43)
at Observable.push../node_modules/rxjs/_esm5/internal/Observable.js.Observable.subscribe (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/Observable.js:29)
at MergeMapOperator.push../node_modules/rxjs/_esm5/internal/operators/mergeMap.js.MergeMapOperator.call (:9876/_karma_webpack_/webpack:/node_modules/rxjs/_esm5/internal/operators/mergeMap.js:29)
at ____________________Elapsed_3_ms__At__Thu_Dec_27_2018_10_03_35_GMT_0100__Mitteleurop_ische_Normalzeit_ ()
at Object.onScheduleTask (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone-testing.js:108)
at ZoneDelegate.push../node_modules/zone.js/dist/zone.js.ZoneDelegate.scheduleTask (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:401)
at Object.onScheduleTask (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:297)
at ZoneDelegate.push../node_modules/zone.js/dist/zone.js.ZoneDelegate.scheduleTask (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:401)
at Zone.push../node_modules/zone.js/dist/zone.js.Zone.scheduleTask (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:232)
at Zone.push../node_modules/zone.js/dist/zone.js.Zone.scheduleMacroTask (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:255)
at scheduleMacroTaskWithCurrentZone (:9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:1114)
at :9876/_karma_webpack_/webpack:/node_modules/zone.js/dist/zone.js:2090
Oooof, sounds like things have turned flaky. We had a run-in with random breaking of unit tests recently. Have you been updating your Angular and Karma versions consistently?
What we ran into is that the way unit tests are set up by default (by the Angular CLI) has changed, and that older tests were not run the proper async way.
The error you are seeing does differ from what we saw, but I'm certain this is an avenue worth exploring to remove any flakiness introduced by the unit-test setup.
As taken from https://angular.io/guide/testing#calling-compilecomponents
describe('BannerComponent', () => {
  let component: BannerComponent;
  let fixture: ComponentFixture<BannerComponent>;

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [ BannerComponent ],
    }).compileComponents(); // compile template and css
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(BannerComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});
Pay extra attention to the first beforeEach(), which wraps its callback in async(() => {}) and contains the required call to .compileComponents().
The second beforeEach() defines and populates the component variable within the shared context of the describe().
I hope this helps you figure out what is causing the flakiness. The RxJS error seems to point towards a test that relies on state set by a previous test, where it receives an input in the form of an Observable. If this Observable is set or defined later than the test's execution, you may run into issues like the one you're describing.
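As a hypothetical illustration of that failure mode, a stream created once at describe scope can be completed or left undefined by an earlier spec; recreating it in beforeEach keeps every spec independent (all names below are made up):

import { Subject } from 'rxjs';

describe('component with an Observable input', () => {
  let input$: Subject<number>; // hypothetical stream handed to the code under test

  beforeEach(() => {
    // Recreate the stream for every spec, so no test depends on
    // a completed or undefined stream left behind by a previous one.
    input$ = new Subject<number>();
  });

  it('receives a value', (done) => {
    input$.subscribe((value) => {
      expect(value).toBe(42);
      done();
    });
    input$.next(42);
  });
});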
It may be caused by the random execution order of Jasmine test cases. In older versions of Jasmine, random execution order was off by default, but in recent versions (Jasmine 3+) it is on by default.
Primary reason: a variable has been overridden by another test case that executed before this one.
Solutions:
Find out why the variable becomes undefined: put a console.log above the statement where the error is thrown, and reinitialize that variable in beforeEach.
Set random to false in your karma.conf.js (see https://github.com/karma-runner/karma-jasmine), as in the sketch below.
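A minimal sketch of that setting, assuming the karma-jasmine adapter linked above; the seed line is optional and only useful for replaying one particular random order:

// karma.conf.js (excerpt)
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    client: {
      jasmine: {
        random: false,   // run specs in declaration order
        // seed: '4321', // or keep random order but pin the seed to reproduce a failure
      },
    },
  });
};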

How to validate an element is gone using webdriverIO, Mocha and Chai

I'm working on some automated tests using WebdriverIO, Mocha and Chai. I keep running into the same problem when I want to verify that an element is deleted.
I'm working with a shopping basket where I delete an item and then verify that it is gone. It takes a while for the item to disappear, though, so if I immediately go to the expect, the item is still there.
I have solved this by doing this:
browser.waitForExist(deletedProduct, 5000, true);
expect(boodschappenLijstPage.isProductPresent(SKU), 'the removed item was still there').to.equal(false);
The WebdriverIO waitFor command waits for the product to disappear, and after that the Chai expect checks that it is gone.
The problem I have with this is that the expect can never fail: if the product is not properly deleted, the waitFor timeout throws an error before I even get to the expect, meaning the expect is only reached when the product is already gone.
I have read through the docs for Chai, but I can't seem to find a way of doing this.
Can anyone show me a way of waiting for the product to be gone without skipping the expect? (I don't want to use browser.pause, for obvious reasons.)
Refer to this:
webElement.waitForDisplayed({ reverse: true });
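For context, a sketch of the same call with the options spelled out (WebdriverIO v5+ object signature; the timeout and message values are placeholders, and deletedProduct is assumed to be a selector string as in the question):

// Waits until the element is no longer displayed (reverse: true),
// failing with a descriptive message after 5 s.
$(deletedProduct).waitForDisplayed({
  reverse: true,
  timeout: 5000,
  timeoutMsg: 'the removed item was still there',
});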
You can use try/catch and basically wait for the error: when an element disappears from the DOM, Selenium throws a NoSuchElementError, and we can use that.
isNotPresent(element) {
  try {
    return !element.isVisible()
  } catch (error) {
    return true
  }
}

// or wait for the element to disappear from the DOM
waitForNotVisible(element) {
  browser.waitUntil(() => {
    try {
      return !element.isVisible()
    } catch (error) {
      return true
    }
  })
}
If you're trying to validate that an element is gone, then using expect is the correct approach, but you should use the WebdriverIO expect assertions instead of the Chai expect assertions. While Chai assertions check the state immediately, the WebdriverIO expect assertions have the waitFor functionality built in. Here is an example:
let deletedProduct = $(/* your XPath/CSS selector */);
expect(deletedProduct).not.toBeDisplayed();
The difference between the assertion and using waitForDisplayed with the reverse flag is that some reporters, such as Allure, will report your test as broken when it should instead be reported as failed. For example, when we ran tests with waitForDisplayed, all of our failing tests were marked in yellow; when we use expect, the tests are marked in red.
Here is the documentation for the WebdriverIO Expect matchers. They didn't document the .not method very clearly, but in my example you can see I added the .not before the toBeDisplayed call.
Again, this expect will wait for the timeout specified in the wdio.conf.js, just like the waitFors will.
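For completeness, a sketch of that assertion with the retry options spelled out (assuming the expect-webdriverio matchers bundled with recent WebdriverIO; the values are placeholders):

// Re-checks the condition until the wait time elapses instead of
// asserting the element's current state only once.
await expect(deletedProduct).not.toBeDisplayed({
  wait: 5000,    // total time to keep retrying
  interval: 100, // how often to re-check
  message: 'the removed item was still there',
});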

Custom matcher not asserting in Astrolabe/Protractor + Jasmine test

I'm writing some page-object driven tests using Protractor and Astrolabe.
Jasmine is being used to implement describe/it style specs.
Adding custom matchers using this.addMatchers doesn't work (TypeError: Object #<Object> has no method 'toContainLowered'), so I used this guide to implement them.
It seems to be working, until I look closely at the output of my test run:
$> grunt test:func
Running "test:func" (test) task
Running "shell:protractor" (shell) task
Using the selenium server at http://localhost:4444/wd/hub
..
Finished in 6.727 seconds
2 tests, 1 assertion, 0 failures
Here is my code:
var loginPage = require('./../pages/loginPage');

describe('Login page', function () {
  var ptor = loginPage.driver;

  beforeEach(function () {
    jasmine.Matchers.prototype.toContainLowered = function (expected) {
      return this.actual.toLowerCase().indexOf(expected) > -1;
    };
    loginPage.go();
    ptor.waitForAngular();
  });

  it('should display login page', function () {
    expect(loginPage.currentUrl).toEqual(ptor.baseUrl);
  });

  it('should display an error when the username or password is incorrect', function () {
    loginPage.login('bad', 'credentials');
    ptor.waitForAngular();
    expect(loginPage.lblError.getText()).toContainLowered('invalid username and/or password');
    // expect(loginPage.lblError.getText()).toContain('Invalid Username and/or Password');
  });
});
If I uncomment the last line and remove the toContainLowered matcher, I get the proper output:
2 tests, 2 assertions, 0 failures
I'm having a really difficult time debugging this promise-based code, and any effort to put a console.log(this.actual.toLowerCase().indexOf(expected) > -1); in the matcher prints false, which is confusing.
I even tried replacing the entire function definition with just return false;, which still did not do anything. Finally, I tried passing no argument to the matcher, which should have thrown an invalid-argument error or something.
How do I define my own matchers in Jasmine when using Protractor/Astrolabe tests?
I've had similar problems with matchers, in particular with the .not matchers, which all seem not to work. My hypothesis is that Protractor extends the Jasmine matchers to deal with the promises, and that this extension hasn't been applied to .not or to custom matchers.
In my case, I wanted a .not.toMatch, and so I just wrote a convoluted regex that gave me what I wanted, with the not embedded in the regex.
I note that your matcher is called toContainLowered, so perhaps you're looking for a case-insensitive match, in which case you could do this with a regex and .toMatch instead.
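For instance, a minimal sketch replacing the custom matcher with a case-insensitive regex (the expected text is taken from the question):

// The /i flag makes the match case-insensitive, so no custom matcher is needed.
expect(loginPage.lblError.getText()).toMatch(/invalid username and\/or password/i);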
The issue I raised on this on the protractor github is here: https://github.com/angular/protractor/issues/266
I also see, in this code file: https://github.com/angular/protractor/blob/master/jasminewd/spec/adapterSpec.js, that the last commit is marked as "patched matcher should understand not". That might either fix the custom matchers for you, or provide an indication of what needs to be done to fix that custom matcher.
EDIT: now looking further into that issue thread, I see you've already been there. Which makes my answer somewhat superfluous. :-)

Auto-rerun failed RSpec tests

I have some code in my tests that works with an external service. This service is not very stable, so sometimes it crashes for no reason, but in 80% of runs it works well.
So I want a way to automatically rerun all failed RSpec examples several times (for example, 2 or 3 times).
Is there any way to do it?
Many people would say that your tests should never hit external services, and this is one of the reasons: your tests should not fail because some external service is down.
TL;DR: use mocks and stubs to replace those external service calls.
Instead of re-running failed specs, couldn't you just run the method accessing the service a set number of times and run the expectation on the logical OR of the results?
So instead of:
it "returns expected value for some args" do
unstable_external_service(<some args>).should == <expected return value>
end
just do something like this:
def run_x_times(times, args)
  return nil if times == 0
  # Retry until the service returns a truthy value or we run out of attempts.
  unstable_external_service(args) || run_x_times(times - 1, args)
end

it "returns expected value for some args" do
  run_x_times(10, <some args>).should == <expected return value>
end
You can use the same wrapper method throughout your tests any time you access the service. I'm assuming here that your service returns nil on failure; if not, you could change this to fit your particular case, but you get the general idea.
