The following example times out in most cases (it outputs "timed out"):
const Promise = require('bluebird');
new Promise(resolve => {
  setTimeout(resolve, 1000);
})
.timeout(1001)
.then(() => {
  console.log('finished');
})
.catch(error => {
  if (error instanceof Promise.TimeoutError) {
    console.log('timed out');
  } else {
    console.log('other error');
  }
});
Does this mean that Bluebird's promise overhead takes longer than 1 ms?
I often see it time out even if I use .timeout(1002).
The main reason for asking: I'm trying to figure out what the safe threshold is, which becomes more important with smaller timeouts.
Using Bluebird 3.5.0, under Node.js 8.1.2
I traced your bug through Bluebird's code. Consider this:
const p = new Promise(resolve => setTimeout(resolve, 1000));
const q = p.timeout(1001); // Bluebird spawns setTimeout(fn, 1001) deep inside
That looks innocent enough, right? Not in this case, though. Internally, Bluebird implements it something like this (not actually valid JS; the timeout-clearing logic is omitted):
Promise.prototype.timeout = function(ms) {
  const original = this;
  let result = original.then(); // looks like a no-op
  setTimeout(() => {
    if (result.isPending()) {
      // pseudocode: make result rejected with a TimeoutError
    }
  }, ms);
  return result;
}
The bug is the result.isPending() check. There is a brief window in which original.isPending() === false but result.isPending() === true, because the "resolved" status has not yet propagated from original to its children. Your code hit that extremely short window and BOOM, you had a race condition.
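If you need a margin this tight, one way to sidestep the isPending() propagation window is to race the original promise against an explicit timer yourself. A hedged sketch (withTimeout is a hypothetical helper, not Bluebird's API; the inherent race between two timers only 1 ms apart of course remains):
const Promise = require('bluebird');
// Hypothetical helper: settles with whichever of the two promises settles first.
function withTimeout(promise, ms) {
  const timer = Promise.delay(ms).then(() => {
    // TimeoutError is Bluebird's public error class, so the existing instanceof check still works.
    throw new Promise.TimeoutError('timed out after ' + ms + ' ms');
  });
  timer.catch(() => {}); // avoid an unhandled-rejection warning when the promise wins the race
  return Promise.race([promise, timer]);
}
withTimeout(new Promise(resolve => setTimeout(resolve, 1000)), 1001)
  .then(() => console.log('finished'))
  .catch(Promise.TimeoutError, () => console.log('timed out'));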
I think what's going on here is a race between the time the rest of the promise chain takes and the timer set by the .timeout(). Because the two are so close in timing, sometimes one wins and sometimes the other does. When I run the following code, which logs the sequence of events, I get a different ordering on different runs; the exact output order is unpredictable.
const Promise = require('bluebird');
let buffer = [];
function log(x) {
  buffer.push(x);
}
new Promise(resolve => {
  setTimeout(() => {
    log("hit my timeout");
    resolve();
  }, 1000);
}).timeout(1001).then(() => {
  log('finished');
}).catch(error => {
  if (error instanceof Promise.TimeoutError) {
    log('timed out');
  } else {
    log('other error');
  }
});
setTimeout(() => {
  console.log(buffer.join("\n"));
}, 2000);
Sometimes this outputs:
hit my timeout
finished
And sometimes it outputs:
hit my timeout
timed out
As has been mentioned in the comments, if .then() were always executed via a microtask (which should precede any macrotask), one would think the .then() would run before the setTimeout() scheduled by the .timeout(), but things are apparently not that simple.
Since the details of promise .then() scheduling are not mandated by the specification (only that the stack must be clear of application code), your code should not assume a specific scheduling algorithm. A timeout this close to the completion of the async operation it follows can therefore be racy and unpredictable.
If you could explain exactly what problem you're trying to solve, we could probably offer more concrete advice about what to do. No timer in JavaScript is precise to the millisecond, because JS is single-threaded and all timer events have to go through the event queue; a callback runs when its event gets serviced, not exactly when the timer fired. That said, timer events are always serviced in order, so a setTimeout(..., 1000) will always fire before a setTimeout(..., 1001), even though the delta between the two callbacks won't be exactly 1 ms.
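To make that ordering point concrete, here's a minimal sketch (plain Node.js, no Bluebird) you could run to observe it:
const start = Date.now();
setTimeout(() => console.log('1000 ms timer fired after', Date.now() - start, 'ms'), 1000);
setTimeout(() => console.log('1001 ms timer fired after', Date.now() - start, 'ms'), 1001);
// The first line always prints before the second, but the measured times are
// usually both a bit above the requested delays, and rarely exactly 1 ms apart.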
Related
I need to test that something does not happen.
While testing something like that may be worth a discussion of its own (how long a wait is long enough?), I was hoping Jest would have a better way to integrate with its test timeouts. So far I haven't found one, but let's begin with the test.
test('User information is not distributed to a project where the user is not a member', async () => {
  // Write in 'userInfo' -> should NOT turn up in project 1.
  //
  await collection("userInfo").doc("xyz").set({ displayName: "blah", photoURL: "https://no-such.png" });
  // (firebase-jest-testing 0.0.3-beta.3)
  await expect( eventually("projects/1/userInfo/xyz", o => !!o, 800 /*ms*/) ).resolves.toBeUndefined();
  // ideally:
  //await expect(prom).not.toComplete; // ..but with cancelling such a promise
}, 9999 /*ms*/ );
The eventually function returns a Promise, and I'd like to check that:
within the test's normal timeout...
such a Promise does not complete (resolve or reject)
Jest provides .resolves and .rejects but nothing that would combine the two.
Can I create the anticipated .not.toComplete using some Jest extension mechanism?
Can I create a "run just before the test would time out" (with ability to make the test pass or fail) trigger?
I think suggestion 2 might turn out to be handy, and I can create a feature request for it, but let's see what comments this gets..
Edit: There's a further complexity in that JS Promises cannot be cancelled from outside (but they can time out, from within).
I eventually solved this with a custom matcher:
/*
 * test-fns/matchers/timesOut.js
 *
 * Usage:
 *   <<
 *     expect(prom).timesOut(500);
 *   <<
 */
import { expect } from '@jest/globals'

expect.extend({
  async timesOut(prom, ms) { // (Promise of any, number) => { message: () => string, pass: boolean }
    // Wait for either 'prom' to complete, or a timeout.
    //
    const [resolved, error] = await Promise.race([ prom, timeoutMs(ms) ])
      .then(x => [x])
      .catch(err => [undefined, err]);

    const pass = (resolved === TIMED_OUT);
    return pass ? {
      message: () => `expected not to time out in ${ms}ms`,
      pass: true
    } : {
      message: () => `expected to time out in ${ms}ms, but ${ error ? `rejected with ${error}` : `resolved with ${resolved}` }`,
      pass: false
    }
  }
})

const timeoutMs = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); })
  .then( _ => TIMED_OUT);

const TIMED_OUT = Symbol()
The good side is that this can be added to any Jest project.
The downside is that one needs to state the delay separately (and make sure Jest's own timeout does not strike first).
It turns the question's code into:
await expect( eventually("projects/1/userInfo/xyz") ).timesOut(300)
Note for Firebase users:
Jest does not exit to the OS level if Firestore JS SDK client listeners are still active. You can prevent this by unsubscribing from them in afterAll, but that means keeping track of which listeners are alive and which are not. The firebase-jest-testing library does this for you, under the hood. Also, this will eventually ;) get fixed by Firebase.
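For those not using the library, here's a hedged sketch of the general pattern with the plain Firestore web SDK (trackedOnSnapshot is a hypothetical helper, not a Firebase API): onSnapshot returns an unsubscribe function, so collect those and release them in afterAll so Jest can exit.
const unsubs = [];
// Wrap onSnapshot so every listener's unsubscribe function is remembered.
function trackedOnSnapshot(ref, onNext) {
  const unsub = ref.onSnapshot(onNext);
  unsubs.push(unsub);
  return unsub;
}
afterAll(() => {
  unsubs.forEach(unsub => unsub()); // release all listeners so Jest can exit
});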
I am new to RxJS (the project is stuck on RxJS 5 for the time being) and I don't really need to refactor these functions, but as I'm trying to get up to speed with RxJS: how would I avoid using toPromise for an HTTP call?
try {
const response = await this.http.post(`${TokenUrl}`, payload, options).toPromise();
} catch(err) {
// whatever
}
I also have this setInterval that periodically pings a server:
this.timerId = setInterval(() => {
this.blobStorageService.ping();
}, 60000);
I tried using interval but could not quite get the syntax right.
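Not a full answer, but a hedged sketch of the RxJS 5 style this seems to be asking for (this.http, TokenUrl, payload, options and blobStorageService come from the question; pingSub is a made-up field name). Instead of awaiting toPromise, subscribe to the Observable; instead of setInterval, use Observable.interval:
// RxJS 5 patch-style imports (assumed):
// import { Observable } from 'rxjs/Observable';
// import 'rxjs/add/observable/interval';

this.http.post(`${TokenUrl}`, payload, options)
  .subscribe(
    response => { /* use the response */ },
    err => { /* whatever */ }
  );

// Emits 0, 1, 2, ... every 60 seconds until unsubscribed.
this.pingSub = Observable.interval(60000)
  .subscribe(() => this.blobStorageService.ping());

// Later, instead of clearInterval(this.timerId):
this.pingSub.unsubscribe();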
Here's my setup:
A Python 3.6 lambda function, which I want to keep pre-warmed at a certain concurrency level (say, 10). The lambda's initialization is painful enough that I don't want to inflict that cost on visitors at random. I call these lambdas "workers".
A Node lambda function which runs every 5 minutes and tries to pre-warm 10 instances. It uses the Event invocation type for 9 of them and RequestResponse for 1. There is only ever one or zero of this lambda running at any time. I call this the "warmer".
I followed the guidelines at https://www.jeremydaly.com/lambda-warmer-optimize-aws-lambda-function-cold-starts/, namely:
Don’t ping more often than every 5 minutes
Invoke the function directly (i.e. don’t use API Gateway to invoke it)
Pass in a test payload that can be identified as such
Create handler logic that replies accordingly without running the whole function
Here's the problem: this works great for several minutes. Then, as I watch the logs, I start to see timeouts from my worker lambda invocations. The timeouts quickly take over all the invocations that the warmer is trying to launch.
Now no worker lambdas are pre-warmed any more, but the warmer keeps trying on its CloudWatch cron schedule, suffering 100% timeouts. Finally, Lambda stops trying to launch my worker lambdas at all. It feels like some part of Lambda is getting its state scrambled. The only way to recover is to re-deploy the lambda. That buys me another hour of pre-warmed lambdas working.
Questions:
How do I get visibility into why my worker lambdas start timing out and then become completely non-responsive?
What is the definition of a "Concurrent Execution"? The main Lambda dashboard shows me a chart of them, yet it seems to report more than twice as many Concurrent Executions as I'm requesting.
Here's the warmup lambda code (Node):
// warmer
"use strict";

/** Generated by Serverless WarmUP Plugin at ${new Date().toISOString()} */
const aws = require("aws-sdk");
aws.config.region = "${this.options.region}";
const lambda = new aws.Lambda({httpOptions: {timeout: 60000}});
const functionNames = ${JSON.stringify(functionNames)};
const delay = ms => new Promise(res => setTimeout(res, ms))
const concurrency = 10;

module.exports.warmUp = async (event, context, callback) => {
  console.log("Warm Up Start");
  const invokes = await Promise.all(functionNames.map(async (functionName) => {
    let invocations = [];
    try {
      for (let i = 1; i <= concurrency; i++) {
        let params = {
          FunctionName: functionName,
          InvocationType: (i === concurrency) ? 'RequestResponse' : 'Event',
          LogType: 'None',
          Qualifier: process.env.SERVERLESS_ALIAS || "$LATEST",
          Payload: JSON.stringify({
            source: 'serverless-plugin-warmup',
            '__WARMER_INVOCATION__': i,
            '__WARMER_CONCURRENCY__': concurrency,
            '__WARMER_REQUESTED__': new Date().toISOString(),
          })
        };
        invocations.push(lambda.invoke(params).promise())
      }
      return await delay(75).then(Promise.all(invocations.map(p => p.catch(e => e)))
        .then(results => console.log('results', results))
        .catch(e => {
          console.log(e);
          return e;
        }
      ))
    } catch (e) {
      console.log(\`Warm Up Invoke Error: \${functionName}\`, e);
      return false;
    }
  }));
  console.log(\`Warm Up Finished\`);
}
And here's the worker lambda (Python):
source = event.get('source')
if source == 'serverless-plugin-warmup':
    time.sleep(0.05)
    print(event)
    return lambda_gateway_response(200, {"status": "lambda warmup"})
It was the warmer (Node) lambda going haywire, even though all the logs pointed at the worker (Python) lambdas. After setting context.callbackWaitsForEmptyEventLoop = false, the problem disappeared.
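For reference, a minimal sketch of where that flag might be set in the warmer's handler (this is not the plugin's generated code, just the general shape):
module.exports.warmUp = async (event, context) => {
  // Tell the Node.js Lambda runtime not to wait for the event loop to drain
  // before freezing the container.
  context.callbackWaitsForEmptyEventLoop = false;
  // ... fire the Event / RequestResponse invocations as in the code above ...
};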
As far as I can tell, using promises or callbacks in the After hook prevents the Command Queue from executing. I'm trying to figure out why; any help or suggestions are appreciated. The closest issue I could find on GitHub is https://github.com/nightwatchjs/nightwatch/issues/341,
which states: "finding that trying to make browser calls in the after hook is too late; it appears that the session is closed before after is run" (exactly my problem), but no solution is provided. I need to run cleanup steps after my scenarios run, and those cleanup steps need to be able to interact with the browser.
https://github.com/nightwatchjs/nightwatch/wiki/Understanding-the-Command-Queue
In the snippet below, bar is never outputted. Just foo.
const { After } = require('cucumber');
const { client } = require('nightwatch-cucumber');
After(() => new Promise((resolve) => {
  console.log('foo')
  client.perform(() => {
    console.log('bar')
  });
}));
I also tried using the callback approach:
After((browser, done) => {
  console.log('foo');
  client.perform(() => {
    console.log('bar');
    done();
  });
});
But similar to the first example, bar is never outputted, just foo.
You can instead use something like:
const moreWork = async () => {
  console.log('bar');
  await new Promise((resolve) => {
    setTimeout(resolve, 10000);
  })
}

After(() => client.perform(async () => {
  console.log('foo');
  moreWork();
}));
But the asynchronous nature of moreWork means that the client terminates before my work is finished, so this isn't really working for me. You can't use an await in the perform, since they are in different execution contexts.
Basically, the only way I've found to get client commands to execute in the After hook is my third example, but it prevents me from using async.
The first and second examples would be great if the command queue didn't freeze and prevent execution.
Edit: I'm finding more issues on GitHub stating that the browser is not available in before/after hooks: https://github.com/nightwatchjs/nightwatch/issues/575
What are you supposed to do if you want to clean up using the browser after all features have run?
Try the following:
After(async () => {
  await client.perform(() => {
    ...
  });
  await moreWork();
})
I have a redux-observable epic that polls an endpoint, getting progress updates until the progress reaches 100%. The polling interval is achieved using debounceTime, like so:
function myEpic(action$, store, dependencies) {
  return action$.ofType('PROCESSING')
    .do(action => console.log(`RECEIVED ACTION: ${JSON.stringify(action)}`))
    .debounceTime(1000, dependencies.scheduler)
    .mergeMap(action => (
      dependencies.ajax({ url: action.checkUrl })
        .map((resp) => {
          if (parseInt(resp.progress, 10) === 100) {
            return { type: 'SUCCESS' };
          }
          return { checkUrl: resp.check_url, progress: resp.progress, type: 'PROCESSING' };
        })));
}
This works fine but I'd like to write an integration test that tests the state of the store when progress is at 25%, then at 50%, then at 100%.
In my integration tests I can set dependencies.scheduler to be new VirtualTimeScheduler().
This is how I'm trying to do it at the moment (using jest):
describe('my integration test', () => {
  const scheduler = new VirtualTimeScheduler();

  beforeEach(() => {
    // Fake ajax responses
    const ajax = (request) => {
      console.log(`FAKING REQUEST FOR URL: ${request.url}`);
      if (request.url === '/check_url_1') {
        return Observable.of({ progress: 25, check_url: '/check_url_2' });
      } else if (request.url === '/check_url_2') {
        return Observable.of({ progress: 50, check_url: '/check_url_3' });
      } else if (request.url === '/check_url_3') {
        return Observable.of({ progress: 100 });
      }
      return null;
    };
    store = configureStore(defaultState, { ajax, scheduler });
  });

  it('should update the store properly after each call', () => {
    store.dispatch({ checkUrl: '/check_url_1', progress: 0, type: 'PROCESSING' });
    scheduler.flush();
    console.log('CHECK CORRECT STATE FOR PROGRESS 25');
    scheduler.flush();
    console.log('CHECK CORRECT STATE FOR PROGRESS 50');
    scheduler.flush();
    console.log('CHECK CORRECT STATE FOR PROGRESS 100');
  });
});
My expected output would be:
RECEIVED ACTION: {"checkUrl":"/check_url_1","progress":0,"type":"PROCESSING"}
FAKING REQUEST FOR URL: /check_url_1
CHECK CORRECT STATE FOR PROGRESS 25
RECEIVED ACTION: {"checkUrl":"/check_url_2","progress":25,"type":"PROCESSING"}
FAKING REQUEST FOR URL: /check_url_2
CHECK CORRECT STATE FOR PROGRESS 50
RECEIVED ACTION: {"checkUrl":"/check_url_3","progress":50,"type":"PROCESSING"}
CHECK CORRECT STATE FOR PROGRESS 100
But instead, the output I get is:
RECEIVED ACTION: {"checkUrl":"/check_url_1","progress":0,"type":"PROCESSING","errors":null}
FAKING REQUEST FOR URL: /check_url_1
RECEIVED ACTION: {"checkUrl":"/check_url_2","progress":25,"type":"PROCESSING","errors":null}
CHECK CORRECT STATE FOR PROGRESS 25
CHECK CORRECT STATE FOR PROGRESS 50
CHECK CORRECT STATE FOR PROGRESS 100
At which point the test finishes. I'm configuring the store so that I can mock the ajax requests and the scheduler used by debounceTime, as recommended here.
So my question is: how can I test the state of my store after each of the three ajax requests?
Interestingly enough, I played around with your code and am fairly confident you just found a bug in the debounceTime operator, which causes the apparent swallowing of the scheduled debounce. The bad news is that even if that bug were fixed, your code still wouldn't do what you're looking for, order-wise.
Bear with me as shit is about to get real:
Epic receives action PROCESSING and schedules debounce, yielding execution to your test
Your test calls scheduler.flush() and the VirtualScheduler executes the scheduled debounce work, which will pass along the original PROCESSING action to the mergeMap
Fake ajax is made, which synchronously emits a response
Response is mapped to the second PROCESSING action
Your epic emits that second action synchronously
The second action is recursively received by your epic and given to the debounce
The debounceTime operator now schedules that second action on the VirtualScheduler, but it is still in the middle of executing the previously scheduled work from the first action.
The call stack unwinds a bunch, until it reaches the inside of the previously scheduled debounce work from the first action, which has just next()'d the first action. The RxJS code for debounceTime then sets this.lastValue = null and this.hasValue = false. This is the RxJS bug: it needs to do that before nexting into the destination.
The stack unwinds some more, back to the running flush() method of the VirtualScheduler, which now dequeues the second scheduled debounce work, because it was added to the scheduled-work array synchronously, before the flush finished. Remember, we've only called scheduler.flush() ONCE so far; that's the function we're back in at this point.
The second scheduled debounce work runs, but this.hasValue === false because the first one cleared it, so the debounceTime operator does not emit anything.
Stack unwinds to our first scheduler.flush()
We console.log('CHECK CORRECT STATE FOR PROGRESS 25')
All the other scheduler.flush() calls do nothing as there's nothing scheduled.
This is technically a bug, but it's not surprising that no one has run into it, since running debounce synchronously without any delay defeats the point of it, except when you're testing, of course. This ticket is basically the same thing, and OJ says RxJS doesn't make reentrancy guarantees, but I think that might be up for debate in this case. I've filed a PR with the fix to discuss.
Remember, fixing this bug wouldn't solve your underlying question about the ordering; it would just prevent the actions from being swallowed.
Off the top of my head, I'm not sure how you would do specifically what you're after while maintaining 100% synchronous behavior (VirtualScheduler); you'd need some way of yielding to your test in between debounces. When and if I write integration tests, I mock out very little, if anything: e.g. let the debounces actually debounce, either naturally or by mocking out setTimeout to advance them more quickly, but still keeping them async. That yields back to your test so you can check the state, but it also makes your test async.
For anyone wanting to reproduce, here's the StackBlitz code I used
The answer was to rewrite the test asynchronously. Also noteworthy: I had to mock the ajax requests by returning an Observable.fromPromise rather than a plain Observable.of, otherwise they would still get swallowed up by the debounce. Something along these lines (using Jest):
describe('my integration test', () => {
  const scheduler = new VirtualTimeScheduler();

  beforeEach(() => {
    // Fake ajax responses
    const ajax = request => (
      Observable.fromPromise(new Promise((resolve) => {
        if (request.url === '/check_url_1') {
          resolve({ response: { progress: 25, check_url: '/check_url_2' } });
        } else if (request.url === '/check_url_2') {
          resolve({ response: { progress: 50, check_url: '/check_url_3' } });
        } else {
          resolve({ response: { progress: 100 } });
        }
      }))
    );
    store = configureStore(defaultState, { ajax, timerInterval: 1 });
  });

  it('should update the store properly after each call', (done) => {
    let i = 0;
    store.subscribe(() => {
      switch (i) {
        case 0:
          console.log('CHECK CORRECT STATE FOR PROGRESS 0');
          break;
        case 1:
          console.log('CHECK CORRECT STATE FOR PROGRESS 25');
          break;
        case 2:
          console.log('CHECK CORRECT STATE FOR PROGRESS 50');
          break;
        case 3:
          console.log('CHECK CORRECT STATE FOR PROGRESS 100');
          done();
          break;
        default:
      }
      i += 1;
    });
    store.dispatch({ checkUrl: '/check_url_1', progress: 0, type: 'PROCESSING' });
  });
});
I also set the timer interval to 1 ms by passing it in as a dependency. In my epic I use it like this: .debounceTime(dependencies.timerInterval || 1000)
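For completeness, a hedged sketch of how the dependencies (including timerInterval) might be wired up, assuming redux-observable 0.x as used with RxJS 5; myEpic, rootReducer and defaultState are the project's own:
const { createStore, applyMiddleware } = require('redux');
const { createEpicMiddleware } = require('redux-observable');

// Hypothetical configureStore, mirroring what the test calls.
function configureStore(preloadedState, dependencies) {
  // dependencies: { ajax, timerInterval } in tests; production code can omit timerInterval
  const epicMiddleware = createEpicMiddleware(myEpic, { dependencies });
  return createStore(rootReducer, preloadedState, applyMiddleware(epicMiddleware));
}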