I have a redux-observable epic that polls an endpoint, getting progress updates until the progress is 100%. The polling interval is achieved using debounceTime like so:
function myEpic(action$, store, dependencies) {
  return action$.ofType('PROCESSING')
    .do(action => console.log(`RECEIVED ACTION: ${JSON.stringify(action)}`))
    .debounceTime(1000, dependencies.scheduler)
    .mergeMap(action => (
      dependencies.ajax({ url: action.checkUrl })
        .map((resp) => {
          if (parseInt(resp.progress, 10) === 100) {
            return { type: 'SUCCESS' };
          }
          return { checkUrl: resp.check_url, progress: resp.progress, type: 'PROCESSING' };
        })));
}
This works fine but I'd like to write an integration test that tests the state of the store when progress is at 25%, then at 50%, then at 100%.
In my integration tests I can set dependencies.scheduler to be new VirtualTimeScheduler().
This is how I'm trying to do it at the moment (using jest):
describe('my integration test', () => {
  const scheduler = new VirtualTimeScheduler();
  let store;

  beforeEach(() => {
    // Fake ajax responses
    const ajax = (request) => {
      console.log(`FAKING REQUEST FOR URL: ${request.url}`);
      if (request.url === '/check_url_1') {
        return Observable.of({ progress: 25, check_url: '/check_url_2' });
      } else if (request.url === '/check_url_2') {
        return Observable.of({ progress: 50, check_url: '/check_url_3' });
      } else if (request.url === '/check_url_3') {
        return Observable.of({ progress: 100 });
      }
      return null;
    };
    store = configureStore(defaultState, { ajax, scheduler });
  });

  it('should update the store properly after each call', () => {
    store.dispatch({ checkUrl: '/check_url_1', progress: 0, type: 'PROCESSING' });
    scheduler.flush();
    console.log('CHECK CORRECT STATE FOR PROGRESS 25');
    scheduler.flush();
    console.log('CHECK CORRECT STATE FOR PROGRESS 50');
    scheduler.flush();
    console.log('CHECK CORRECT STATE FOR PROGRESS 100');
  });
});
My expected output would be:
RECEIVED ACTION: {"checkUrl":"/check_url_1","progress":0,"type":"PROCESSING"}
FAKING REQUEST FOR URL: /check_url_1
CHECK CORRECT STATE FOR PROGRESS 25
RECEIVED ACTION: {"checkUrl":"/check_url_2","progress":25,"type":"PROCESSING"}
FAKING REQUEST FOR URL: /check_url_2
CHECK CORRECT STATE FOR PROGRESS 50
RECEIVED ACTION: {"checkUrl":"/check_url_3","progress":50,"type":"PROCESSING"}
FAKING REQUEST FOR URL: /check_url_3
CHECK CORRECT STATE FOR PROGRESS 100
But instead the output I get is
RECEIVED ACTION: {"checkUrl":"/check_url_1","progress":0,"type":"PROCESSING","errors":null}
FAKING REQUEST FOR URL: /check_url_1
RECEIVED ACTION: {"checkUrl":"/check_url_2","progress":25,"type":"PROCESSING","errors":null}
CHECK CORRECT STATE FOR PROGRESS 25%
CHECK CORRECT STATE FOR PROGRESS 50%
CHECK CORRECT STATE FOR PROGRESS 100%
At which point the test finishes. I'm configuring the store so that I can mock the ajax requests and the scheduler used for debounceTime, as recommended here.
So my question is how can I test the state of my store after each of the three ajax requests?
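For context, here is a hedged sketch of what that configureStore helper might look like; the rootEpic and rootReducer names are assumptions, and createEpicMiddleware(rootEpic, { dependencies }) is the redux-observable 0.x signature that matches the RxJS 5 operators used above:
// Sketch only: inject the (possibly fake) ajax and scheduler into the epics
// through redux-observable's `dependencies` option.
import { createStore, applyMiddleware } from 'redux';
import { createEpicMiddleware } from 'redux-observable';

function configureStore(initialState, dependencies) {
  const epicMiddleware = createEpicMiddleware(rootEpic, { dependencies });
  return createStore(rootReducer, initialState, applyMiddleware(epicMiddleware));
}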
Interestingly enough, I played around with your code and am fairly confident you just found a bug in the debounceTime operator, which causes the apparent swallowing of the scheduled debounce. The bad news is that even if that bug is fixed, your code still wouldn't do what you're looking for, order-wise.
Bear with me as shit is about to get real:
1. The epic receives the first PROCESSING action and schedules the debounce, yielding execution to your test.
2. Your test calls scheduler.flush() and the VirtualTimeScheduler executes the scheduled debounce work, which passes the original PROCESSING action along to the mergeMap.
3. The fake ajax call is made, which synchronously emits a response.
4. The response is mapped to the second PROCESSING action.
5. Your epic emits that second action synchronously.
6. The second action is recursively received by your epic and given to the debounce.
7. The debounceTime operator now schedules that second action on the VirtualTimeScheduler, but it is still in the middle of executing the previously scheduled work from the first action.
8. The call stack unwinds, back up to the point inside the previously scheduled debounce work (from the first action) that had just next()'d the first action. The RxJS code for debounceTime then sets this.lastValue = null and this.hasValue = false. This is the RxJS bug: it needs to happen before nexting into the destination.
9. The stack unwinds some more, up to the running flush() method of the VirtualTimeScheduler, which now dequeues the second scheduled debounce work, because it was added to the scheduled work array synchronously, before the flushing finished. Remember, we've only called scheduler.flush() ONCE so far, and that's the function we're back in at this point.
10. The second scheduled debounce work runs, but this.hasValue === false because the first scheduled work set it, so the debounceTime operator does not emit anything.
11. The stack unwinds to our first scheduler.flush().
12. We console.log('CHECK CORRECT STATE FOR PROGRESS 25').
13. All the other scheduler.flush() calls do nothing, as there's nothing scheduled.
This is technically a bug, but it's not surprising that no one has run into it, since running debounce synchronously without any delay defeats the point of it, except when you're testing, of course. This ticket is basically the same thing, and OJ says RxJS doesn't make reentrancy guarantees, but I think that might be up for debate in this case. I've filed a PR with the fix to discuss it.
Remember, fixing this bug wouldn't have solved your underlying question about the ordering, but it would have prevented the actions from being swallowed.
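To make the swallowing concrete, here is a minimal sketch of the re-entrant pattern (RxJS 5 imports assumed), stripped of redux-observable entirely: a subject whose debounced output feeds back into itself, just like the epic re-dispatching PROCESSING:
// Minimal sketch of the re-entrancy described above (RxJS 5 style).
const { Subject } = require('rxjs/Subject');
const { VirtualTimeScheduler } = require('rxjs/scheduler/VirtualTimeScheduler');
require('rxjs/add/operator/debounceTime');

const scheduler = new VirtualTimeScheduler();
const subject = new Subject();
const seen = [];

subject.debounceTime(10, scheduler).subscribe(n => {
  seen.push(n);
  if (n < 3) subject.next(n + 1); // re-entrant emission, like the epic re-dispatching PROCESSING
});

subject.next(1);
scheduler.flush();
console.log(seen); // with the bug described above, only [1] shows up; 2 never makes it out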
Off the top of my head I'm not sure how you would do specifically what you'd like if you want to maintain 100% synchronous behavior (VirtualTimeScheduler); you'd need some way of yielding to your test in between debounces. When and if I write integration tests, I mock out very little, if anything: e.g. let the debounces actually debounce, either naturally or by mocking out setTimeout to advance them quicker while still keeping them async. That yields back to your test, allowing you to check the state, but it also makes your test async.
For anyone wanting to reproduce, here's the StackBlitz code I used
The answer was to rewrite the test asynchronously. Also noteworthy is that I had to mock the ajax requests by returning an Observable.fromPromise rather than just a regular Observable.of, otherwise they would still get swallowed up by the debounce. Something along these lines (using jest):
describe('my integration test', () => {
  let store;

  beforeEach(() => {
    // Fake ajax responses
    const ajax = request => (
      Observable.fromPromise(new Promise((resolve) => {
        if (request.url === '/check_url_1') {
          resolve({ response: { progress: 25, check_url: '/check_url_2' } });
        } else if (request.url === '/check_url_2') {
          resolve({ response: { progress: 50, check_url: '/check_url_3' } });
        } else {
          resolve({ response: { progress: 100 } });
        }
      }))
    );
    store = configureStore(defaultState, { ajax, timerInterval: 1 });
  });
  it('should update the store properly after each call', (done) => {
    let i = 0;
    store.subscribe(() => {
      switch (i) {
        case 0:
          console.log('CHECK CORRECT STATE FOR PROGRESS 0');
          break;
        case 1:
          console.log('CHECK CORRECT STATE FOR PROGRESS 25');
          break;
        case 2:
          console.log('CHECK CORRECT STATE FOR PROGRESS 50');
          break;
        case 3:
          console.log('CHECK CORRECT STATE FOR PROGRESS 100');
          done();
          break;
        default:
      }
      i += 1;
    });
    store.dispatch({ checkUrl: '/check_url_1', progress: 0, type: 'PROCESSING' });
  });
});
I also set the timer interval to 1 by passing it as a dependency. In my epic I set it like this: .debounceTime(dependencies.timerInterval || 1000)
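For completeness, a hedged sketch of how the epic might look with that injectable interval; note that, because the mocked ajax above resolves an AjaxResponse-shaped object ({ response: ... }), this version reads resp.response rather than resp as in the original epic:
function myEpic(action$, store, dependencies) {
  return action$.ofType('PROCESSING')
    .debounceTime(dependencies.timerInterval || 1000)
    .mergeMap(action => (
      dependencies.ajax({ url: action.checkUrl })
        .map((resp) => {
          const { progress, check_url } = resp.response;
          if (parseInt(progress, 10) === 100) {
            return { type: 'SUCCESS' };
          }
          return { checkUrl: check_url, progress, type: 'PROCESSING' };
        })));
}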
Related
I have a short question.
Does Inertia render asynchronously?
I've realized that as soon as I delete a DB entry and then click a new nav link right afterwards (Inertia.onStart), which redirects me to another page, the onSuccess changes won't show up.
Inertia.post('data-delete', {
  id: this.meeeh.data[index].id,
}, {
  preserveScroll: true,
  onBefore: () => {
    window.Toast.confirm('Delete?');
  },
  onStart: (visit) => {
    window.Toast.load('Delete...');
  },
  onSuccess: (page) => {
    return Promise.all([
      window.Toast.success(page.props.toast),
      /* Won't show after clicking another link in the navbar */
    ]);
  },
  onError: (errors) => {
    window.Toast.error(errors);
  }
});
How come I have to wait until the process is finished? Otherwise my page does not work correctly.
Not sure if I understand what you're looking for.
onSuccess runs immediately after the post request has finished AND is successful. It is completely separate from other links, and its purpose (if you're returning a Promise from it) is to delay the execution of the onFinish handler.
From the docs:
It's also possible to return a promise from the onSuccess() and
onError() callbacks. This will delay the "finish" event until the
promise has resolved.
I also believe there's a problem in your code: Promise.all should receive an array of Promises, and I'm pretty sure window.Toast.success(page.props.toast) isn't returning one, is it?
So... chances are that your Promise.all is never resolving.
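One hedged way to rule that out (assuming nothing about what window.Toast.success returns) is to wrap its return value, so onSuccess always hands Inertia a Promise that is guaranteed to resolve:
// Hedged sketch: drop-in replacement for the onSuccess handler above.
// Promise.resolve() turns a non-Promise return value into an already-resolved
// Promise, and follows it if it happens to be a real Promise/thenable.
onSuccess: (page) => {
  return Promise.resolve(window.Toast.success(page.props.toast));
},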
I have the following need: to test that something does not happen.
While testing something like that may be worth a discussion of its own (how long a wait is long enough?), I was hoping there would be a better way in Jest to integrate with test timeouts. So far, I haven't found one, but let's begin with the test.
test('User information is not distributed to a project where the user is not a member', async () => {
  // Write in 'userInfo' -> should NOT turn up in project 1.
  //
  await collection("userInfo").doc("xyz").set({ displayName: "blah", photoURL: "https://no-such.png" });

  // (firebase-jest-testing 0.0.3-beta.3)
  await expect( eventually("projects/1/userInfo/xyz", o => !!o, 800 /*ms*/) ).resolves.toBeUndefined();

  // ideally:
  //await expect(prom).not.toComplete;   // ..but with cancelling such a promise
}, 9999 /*ms*/ );
eventually returns a Promise, and I'd like to check that:
within the test's normal timeout...
such a Promise does not complete (resolve or reject)
Jest provides .resolves and .rejects but nothing that would combine the two.
1. Can I create the anticipated .not.toComplete using some Jest extension mechanism?
2. Can I create a "run just before the test would time out" trigger (with the ability to make the test pass or fail)?
I think suggestion 2 might turn out to be handy, and I can create a feature request for it, but let's see what comments this gets..
Edit: There's a further complexity in that JS Promises cannot be cancelled from outside (but they can time out, from within).
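That "time out from within" idea boils down to racing the promise under test against a timer of your own; a minimal sketch (the withTimeout name is just for illustration):
// Rejects if `promise` hasn't settled within `ms` milliseconds.
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) => setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)),
  ]);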
I eventually solved this with a custom matcher:
/*
 * test-fns/matchers/timesOut.js
 *
 * Usage:
 *   <<
 *     expect(prom).timesOut(500);
 *   <<
 */
import { expect } from '@jest/globals'

expect.extend({
  async timesOut(prom, ms) {   // (Promise of any, number) => { message: () => string, pass: boolean }

    // Wait for either 'prom' to complete, or a timeout.
    //
    const [resolved, error] = await Promise.race([ prom, timeoutMs(ms) ])
      .then(x => [x])
      .catch(err => [undefined, err]);

    const pass = (resolved === TIMED_OUT);

    return pass ? {
      message: () => `expected not to time out in ${ms}ms`,
      pass: true
    } : {
      message: () => `expected to time out in ${ms}ms, but ${ error ? `rejected with ${error}` : `resolved with ${resolved}` }`,
      pass: false
    }
  }
})

const timeoutMs = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); })
  .then( _ => TIMED_OUT);

const TIMED_OUT = Symbol()
source
The good side is, this can be added to any Jest project.
The down side is, one needs to separately mention the delay (and guarantee Jest's time out does not happen before).
Makes the question's code become:
await expect( eventually("projects/1/userInfo/xyz") ).timesOut(300)
Note for Firebase users:
Jest does not exit to the OS level if Firestore JS SDK client listeners are still active. You can prevent this by unsubscribing from them in afterAll - but this means keeping track of which listeners are alive and which are not. The firebase-jest-testing library does this for you, under the hood. Also, this will eventually ;) get fixed by Firebase.
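A hedged sketch of doing that bookkeeping by hand (v8-style Firestore API assumed; db is an already-initialized Firestore client):
// Collect every onSnapshot unsubscribe function so afterAll can tear them down,
// letting Jest exit cleanly.
const unsubscribers = [];

function listenTo(docPath, onNext) {
  const unsubscribe = db.doc(docPath).onSnapshot(onNext);
  unsubscribers.push(unsubscribe);
  return unsubscribe;
}

afterAll(() => {
  unsubscribers.forEach(unsubscribe => unsubscribe());
});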
Background
I use 3 back-end servers to provide fault tolerance for one of my online SaaS applications. All important API calls, such as getting user data, contact all 3 servers and use the value of the first successfully resolved response, if any.
export function getSuccessValueOrThrow$<T>(
  observables$: Observable<T>[],
  tryUntilMillies = 30000,
): Observable<T> {
  return race(
    ...observables$.map(observable$ => {
      return observable$.pipe(
        timeout(tryUntilMillies),
        catchError(err => {
          return of(err).pipe(delay(5000), mergeMap(_err => throwError(_err)));
        }),
      );
    })
  );
}
getSuccessValueOrThrow$ gets called as follows:
const shuffledApiDomainList = ['server1-domain', 'server2-domain', 'server3-domain'];

const sessionInfo = await RequestUtils.getSuccessValueOrThrow$(
  shuffledApiDomainList.map(shuffledDomain =>
    this.http.get<SessionDetails>(`${shuffledDomain}/file/converter/comm/session/info`)),
).toPromise();
Note: if one request resolves faster than the others, which is usually the case, the race rxjs function will cancel the other two requests. On the Chrome dev tools network tab it will look like below, where the first request sent out was cancelled due to being too slow.
Question:
I use /file/converter/comm/session/info (let's call it Endpoint 1) to get some data related to a user. This request is dispatched to all 3 back-end servers. If one resolves, then the remaining 2 requests will be cancelled, i.e. they will return null.
On my Cypress E2E test I have
cy.route('GET', '/file/converter/comm/session/info').as('getSessionInfo');
cy.visit('https://www.ps2pdf.com/compress-mp4');
cy.wait('@getSessionInfo').its('status').should('eq', 200)
This sometimes fails, since the getSessionInfo alias was hooked onto a request that ultimately got cancelled by getSuccessValueOrThrow$ because it wasn't the request that succeeded. The image below shows how 1 out of 3 requests aliased with getSessionInfo succeeded, but the test failed since the first request failed.
In Cypress, how do I wait for a successful i.e. status = 200 request?
Approach 1
Use .should() callback and repeat the cy.wait call if status was not 200:
function waitFor200(routeAlias, retries = 2) {
  cy.wait(routeAlias).then(xhr => {
    if (xhr.status === 200) return;                            // OK
    else if (retries > 0) waitFor200(routeAlias, retries - 1); // wait for the next response
    else throw "All requests returned non-200 response";
  });
}

// Usage example.
// Note that no assertions are chained here,
// the check has been performed inside this function already.
waitFor200('@getSessionInfo');

// Proceed with your test
cy.get('button').click(); // ...
Approach 2
Revise what it is that you want to test in the first place.
Chances are - there is something on the page that tells the user about a successful operation. E.g. show/hide a spinner or a progress bar, or just that the page content is updated to show new data fetched from the backend.
So in this approach you would remove cy.wait() altogether, and focus on what the user sees on the page - do some assertions on the actual page content.
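A hedged sketch of that idea (the selector and button text are assumptions about your page, not something Cypress or the question defines):
cy.visit('https://www.ps2pdf.com/compress-mp4');
// Assert on what the user actually sees once the session info has loaded,
// regardless of which of the three racing requests won.
cy.contains('button', 'Compress', { timeout: 10000 }).should('be.visible');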
cy.wait() yields an object containing the HTTP request and response properties of the XHR. The error you're getting is because you're looking for the property status on the XHR object, but it is a property of the Response object. You first have to get to the Response object:
cy.wait('@getSessionInfo').should(xhr => {
  expect(xhr.response).to.have.property('status', 200);
});
Edit: Since our backend uses graphql, all calls use the single /graphql endpoint. So I had to come up with a solution to differentiate each call.
I did that by using the onResponse() method of cy.route() and accumulating the data in Cypress environment object:
cy.route({
  method: 'GET',
  url: '/file/converter/comm/session/info',
  onResponse(xhr) {
    if (xhr.status === 200) {
      Cypress.env('sessionInfo200', xhr); // Cypress.env(name, value) sets the value
    }
  }
});
You can then use it like this:
cy.wrap(Cypress.env()).should('have.property', 'sessionInfo200');
I wait like this:
const isOk = cy.wait("@getSessionInfo").then((xhr) => {
  return (xhr.status === 200);
});
The following example times out in most cases (outputs timed out):
const Promise = require('bluebird');

new Promise(resolve => {
  setTimeout(resolve, 1000);
})
  .timeout(1001)
  .then(() => {
    console.log('finished');
  })
  .catch(error => {
    if (error instanceof Promise.TimeoutError) {
      console.log('timed out');
    } else {
      console.log('other error');
    }
  });
Does this mean that Bluebird's promise overhead takes longer than 1ms?
I see it often time out even if I use .timeout(1002).
The main reason for asking: I'm trying to figure out what the safe threshold is, which gets more important with smaller timeouts.
Using Bluebird 3.5.0, under Node.js 8.1.2
I have traced your bug in Bluebird's code. Consider this:
const p = new Promise(resolve => setTimeout(resolve, 1000));
const q = p.timeout(1001); // Bluebird spawns setTimeout(fn, 1001) deep inside
That looks rather innocent, yeah? Though not in this case. Internally, Bluebird implemented it something like this (not actually valid JS; the timeout clearing logic is omitted):
Promise.prototype.timeout = function (ms) {
  const original = this;
  let result = original.then();   // Looks like a noop
  setTimeout(() => {
    if (result.isPending()) {
      make result rejected with TimeoutError;   // Pseudocode
    }
  }, ms);
  return result;
};
The bug was the presence of the result.isPending() line. It resulted in a brief window when original.isPending() === false and result.isPending() === true, because the "resolved" status hadn't propagated yet from original to its children. Your code hit that extremely short window and BOOM, you had a race condition.
I think what's going on here is that there's a race between the time the rest of the promise chain takes and the timer from the .timeout(). Since they are both so close in timing, sometimes one wins and sometimes the other wins - they are racy. When I run this code that logs the sequence of events, I get different ordering on different runs. The exact output order is unpredictable (i.e. racy).
const Promise = require('bluebird');

let buffer = [];

function log(x) {
  buffer.push(x);
}

new Promise(resolve => {
  setTimeout(() => {
    log("hit my timeout");
    resolve();
  }, 1000);
}).timeout(1001).then(() => {
  log('finished');
}).catch(error => {
  if (error instanceof Promise.TimeoutError) {
    log('timed out');
  } else {
    log('other error');
  }
});

setTimeout(() => {
  console.log(buffer.join("\n"));
}, 2000);
Sometimes this outputs:
hit my timeout
finished
And, sometimes it outputs:
hit my timeout
timed out
As has been mentioned in the comments, if .then() was always executed via microtask (which should precede any macrotasks), then one would think that the .then() would precede the setTimeout() from the .timeout(), but things are apparently not that simple.
Since the details of promise .then() scheduling are not mandated by the specification (only that the stack is clear of application code), a code design should not assume a specific scheduling algorithm. Thus, a timeout this close to the completion of the async operation it follows can be racy and thus unpredictable.
If you could explain exactly what problem you're trying to solve, we could probably offer more concrete advice about what to do. No timers in JavaScript are precise to the ms, because JS is single-threaded and all timer events have to go through the event queue; they only call their callbacks when their event gets serviced, not exactly when the timer fired. That said, timer events will always be served in order, so a setTimeout(..., 1000) will always come before a setTimeout(..., 1001), even though there may not be exactly a 1ms delta between the execution of the two callbacks.
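A quick way to see that ordering guarantee (though not the exact 1ms spacing) for yourself:
const start = Date.now();
// The 1000ms callback is always serviced before the 1001ms one,
// even if the measured gap between them isn't exactly 1ms.
setTimeout(() => console.log('1000ms timer fired after', Date.now() - start, 'ms'), 1000);
setTimeout(() => console.log('1001ms timer fired after', Date.now() - start, 'ms'), 1001);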
I have a buggy Web service that sporadically sends a 500-error "XMLHttpRequest cannot load http://54.175.3.41:3124/solve. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://local.xxx.me:8080' is therefore not allowed access. The response had HTTP status code 500."
I use Bacon.retry to wrap the ajax call. When it fails, it'll just retry. However, what I notice is that the stream won't produce a value when the server fails. It's as if Bacon.retry doesn't retry (which is in fact what's happening, when I look under the hood in the dev console).
I'm using BaconJS 0.7.65.
The Bacon.retry observable looks like this:
var ajaxRequest = Bacon.fromPromise($.ajax(/* ... */));

var observable = Bacon.retry({
  source: function() { return ajaxRequest; },
  retries: 50,
  delay: function() { return 100; }
});
The code that calls the observable looks like this:
stream.flatMap(function(valuesOrObservables) {
return Bacon.fromArray(valuesOrObservables)
.flatMapConcat(function(valueOrObservable) {
switch(valueOrObservable.type) { //we calculate this beforehand
case 'value' :
return valueOrObservable.value;
case 'observable' :
return Bacon.fromArray(valueOrObservable.observables)
.flatMapConcat(function(obs) { return obs; })
}
})
})
Observations:
if I add an error handler to the observable, it still does not work.
for some reason, #retry is called 50 times even when it succeeds.
I'm not entirely sure about Bacon, but in RxJS, Ajax calls are usually wrapped in AsyncSubjects, so re-subscribing to an errored stream will just fire off the same error; you generally have to re-execute the method that produces the observable.
So something like retry would be (again sorry this is in Rx):
Rx.Observable.defer(() => callAjaxReturnObservable())
  .retry(50)
  .subscribe();
EDIT 1
Trying to Baconize this and clarify my earlier answer:
var observable = Bacon.retry({
  source: function() { return Bacon.fromPromise($.ajax(/**/)); },
  retries: 50,
  delay: function() { return 100; }
});
If you don't have the fromPromise inside of the source function, then every time you retry the downstream will just receive the same exception.