Expecting a Promise *not* to complete, in Jest

I need to test that something does *not* happen.
While testing something like that is debatable (how long a wait is long enough?), I was hoping Jest would have a better way to integrate with its test timeouts. So far, I haven't found one, but let's begin with the test.
test('User information is not distributed to a project where the user is not a member', async () => {

  // Write in 'userInfo' -> should NOT turn up in project 1.
  //
  await collection("userInfo").doc("xyz").set({ displayName: "blah", photoURL: "https://no-such.png" });

  // (firebase-jest-testing 0.0.3-beta.3)
  await expect( eventually("projects/1/userInfo/xyz", o => !!o, 800 /*ms*/) ).resolves.toBeUndefined();

  // ideally:
  //await expect(prom).not.toComplete;   // ..but with cancelling such a promise

}, 9999 /*ms*/ );
eventually returns a Promise, and I'd like to check that:
within the test's normal timeout...
such a Promise does not complete (resolve or reject)
Jest provides .resolves and .rejects but nothing that would combine the two.
1. Can I create the anticipated .not.toComplete using some Jest extension mechanism?
2. Can I create a "run just before the test would time out" trigger (with the ability to make the test pass or fail)?
I think suggestion 2 might come in handy, and I could file a feature request for it, but let's see what comments this gets.
Edit: There's a further complexity in that JS Promises cannot be cancelled from outside (but they can time out, from within).

I eventually solved this with a custom matcher:
/*
* test-fns/matchers/timesOut.js
*
* Usage:
*   <<
*     expect(prom).timesOut(500);
*   <<
*/
import { expect } from '@jest/globals'

expect.extend({
  async timesOut(prom, ms) {   // (Promise of any, number) => { message: () => string, pass: boolean }

    // Wait for either 'prom' to complete, or the timeout.
    //
    const [resolved, error] = await Promise.race([ prom, timeoutMs(ms) ])
      .then(x => [x])
      .catch(err => [undefined, err]);

    const pass = (resolved === TIMED_OUT);

    return pass ? {
      message: () => `expected not to time out in ${ms}ms`,
      pass: true
    } : {
      message: () => `expected to time out in ${ms}ms, but ${ error ? `rejected with ${error}` : `resolved with ${resolved}` }`,
      pass: false
    }
  }
})

const timeoutMs = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); })
  .then(_ => TIMED_OUT);

const TIMED_OUT = Symbol()
The upside is that this can be added to any Jest project.
The downside is that one needs to state the delay separately (and ensure that Jest's own timeout does not fire first).
The question's code becomes:
await expect( eventually("projects/1/userInfo/xyz") ).timesOut(300)
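For completeness, here is a minimal test-file sketch of how the matcher could be wired in (the file path and the 2000 ms test timeout are illustrative, and the import of eventually from the question's setup is omitted):

import './matchers/timesOut.js'          // registers the matcher via expect.extend
import { test, expect } from '@jest/globals'

test('user info never shows up in project 1', async () => {
  // The matcher's delay (300 ms) must stay safely below the test timeout (2000 ms).
  await expect( eventually("projects/1/userInfo/xyz") ).timesOut(300);
}, 2000 /*ms*/);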
Note for Firebase users:
Jest does not exit to the OS level if Firestore JS SDK client listeners are still active. You can prevent this by unsubscribing from them in afterAll - but that means keeping track of which listeners are still alive. The firebase-jest-testing library does this for you, under the hood. Also, this will eventually ;) get fixed by Firebase.
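A minimal sketch of that manual bookkeeping, assuming the namespaced (v8-style) Firestore JS SDK where onSnapshot returns an unsubscribe function; the helper name is illustrative:

const unsubscribers = [];

// Wrap listener creation so that every unsubscribe function is remembered.
function listen(ref, onNext) {
  const unsub = ref.onSnapshot(onNext);
  unsubscribers.push(unsub);
  return unsub;
}

afterAll(() => {
  // Detach all listeners so that Jest can exit cleanly.
  unsubscribers.forEach(unsub => unsub());
});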

Related

IndexedDB breaks in Firefox after trying to save autoIncremented Blob

I am trying to implement Blob storage via IndexedDB for long media recordings.
My code works fine in Chrome and Edge (not tested in Safari yet) - but does nothing in Firefox. There are no errors; it just doesn't fulfill my requests past the initial DB connection (which succeeds). Intuitively, it seems that the processing is blocked by something, but I don't have anything in my code that would be blocking.
Simplified version of the code (without the heavy logging and extra error checks I added while trying to debug):
const dbName = 'recording'
const storeValue = 'blobs'
let connection = null
const handler = window.indexedDB || window.mozIndexedDB || window.webkitIndexedDB

function connect() {
  return new Promise((resolve, reject) => {
    const request = handler.open(dbName)
    request.onupgradeneeded = (event) => {
      const db = event.target.result
      if (db.objectStoreNames.contains(storeValue)) {
        db.deleteObjectStore(storeValue)
      }
      db.createObjectStore(storeValue, {
        keyPath: 'id',
        autoIncrement: true,
      })
    }
    request.onerror = () => {
      reject()
    }
    request.onsuccess = () => {
      connection = request.result
      connection.onerror = () => {
        connection = null
      }
      connection.onclose = () => {
        connection = null
      }
      resolve()
    }
  })
}

async function saveChunk(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection.transaction(
      storeValue,
      'readwrite'
    ).objectStore(storeValue)
    const req = store.add(chunk)
    req.onsuccess = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
      resolve(req.result)
    }
    req.onerror = () => {
      reject()
    }
    req.transaction.oncomplete = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
    }
  })
}

// ... on blob available
await saveChunk(blob)
What I have tried so far:
close any other browser windows, or anything else that could count as an "open connection" blocking execution
refresh the Firefox profile
let a colleague test the code on his own machine => same result
Additional information that might be useful:
Running in a Nuxt 2.15.8 dev environment (localhost:3000). The code is used in the component as a mixin. The project is rather large and uses a bunch of different browser APIs, so there might be some kind of collision?! This is the only place where we use IndexedDB, though, so getting to the bottom of this without any errors being thrown seems almost impossible.
Edit:
When I create a brand-new database, there is a brief window in which transactions complete fine, but after some time has passed or something has triggered, requests go back to being queued indefinitely.
I found out this morning when I had this structure:
...
clearDatabase() {
  // get the store
  const req = store.clear()
  req.transaction.oncomplete = () => console.log('all good!')
}

await this.connect()
await this.clearDatabase()
'all good!' fired, but any subsequent requests were broken, the same as before.
On page reload, even the clearDatabase request was broken again.
Something breaks with ongoing usage.
Edit2:
It's clearly connected to saving a Blob instance without an id while using the autoIncrement option. Not only does it fail silently, it basically corrupts the DB completely. If I manually assign an incrementing ID to a Blob object, it works! If I leave out the id field for a regular, simple object, it also works! Does anyone know about this? I feel like saving Blobs is a common use case, so this should have been found already?!
I've concluded, unless proven otherwise, that it's a Firefox bug, and opened a ticket on Bugzilla.
This happens with Blobs but might also be true for other object types. If you find yourself in the same situation, there is a workaround: don't rely on autoIncrement, and assign IDs manually before trying to save the objects to the DB.
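A minimal sketch of that workaround, keeping a manual counter instead of autoIncrement (the counter handling and the wrapping object are illustrative, not the poster's exact code):

let nextId = 1   // in a real app, persist/restore this, e.g. from the store's highest existing key

async function saveChunkWithManualId(blob) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection.transaction(storeValue, 'readwrite').objectStore(storeValue)
    // Set the keyPath field ('id') ourselves instead of relying on autoIncrement.
    const req = store.add({ id: nextId++, blob })
    req.onsuccess = () => resolve(req.result)
    req.onerror = () => reject(req.error)
  })
}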

Lambdas stop invoking after a period of time

Here's my setup:
A Python 3.6 lambda function, which I want to keep pre-warmed at a certain concurrency level (say, 10). The lambda's initialization is painful enough that I don't want to inflict this cost on visitors at random. I call these lambdas "workers".
A Node lambda function which runs every 5 minutes and tries to pre-warm 10 instances. It uses the Event invocation type for 9 of them, and RequestResponse for 1. There is only ever one or zero of this lambda running at any one time. I call this the "warmer".
I followed the guidelines at https://www.jeremydaly.com/lambda-warmer-optimize-aws-lambda-function-cold-starts/, namely:
Don’t ping more often than every 5 minutes
Invoke the function directly (i.e. don’t use API Gateway to invoke it)
Pass in a test payload that can be identified as such
Create handler logic that replies accordingly without running the whole function
Here's the problem: this works great for several minutes. Then, as I watch the logs, I start to get timeouts from my worker lambda invocations. The timeouts quickly take over all the invocations that the warmer is trying to launch.
Now no worker lambdas are pre-warmed any more. But the warmer keeps on trying, on a CloudWatch event cron schedule, suffering 100% timeouts. Finally, Lambda stops trying to launch my worker lambdas at all. It feels like some aspect of Lambda is getting its state scrambled. The only way to recover is to re-deploy the lambda. That buys me another hour with pre-warmed lambdas working.
Questions:
How do I get visibility into why my worker lambdas start timing out, and then become completely non-responsive?
What is the definition of a "Concurrent Execution"? The main Lambda dashboard shows a chart of them, yet it seems to report more than twice as many Concurrent Executions as I'm requesting.
Here's the warmup lambda code (Node):
// warmer

"use strict";

/** Generated by Serverless WarmUP Plugin at ${new Date().toISOString()} */
const aws = require("aws-sdk");
aws.config.region = "${this.options.region}";
const lambda = new aws.Lambda({httpOptions: {timeout: 60000}});
const functionNames = ${JSON.stringify(functionNames)};
const delay = ms => new Promise(res => setTimeout(res, ms))
const concurrency = 10;

module.exports.warmUp = async (event, context, callback) => {
  console.log("Warm Up Start");
  const invokes = await Promise.all(functionNames.map(async (functionName) => {
    let invocations = [];
    try {
      for (let i = 1; i <= concurrency; i++) {
        let params = {
          FunctionName: functionName,
          InvocationType: (i === concurrency) ? 'RequestResponse' : 'Event',
          LogType: 'None',
          Qualifier: process.env.SERVERLESS_ALIAS || "$LATEST",
          Payload: JSON.stringify({
            source: 'serverless-plugin-warmup',
            '__WARMER_INVOCATION__': i,
            '__WARMER_CONCURRENCY__': concurrency,
            '__WARMER_REQUESTED__': new Date().toISOString(),
          })
        };
        invocations.push(lambda.invoke(params).promise())
      }

      return await delay(75).then(Promise.all(invocations.map(p => p.catch(e => e)))
        .then(results => console.log('results', results))
        .catch(e => {
          console.log(e);
          return e;
        }
      ))
    } catch (e) {
      console.log(`Warm Up Invoke Error: ${functionName}`, e);
      return false;
    }
  }));
  console.log(`Warm Up Finished`);
}
And here's the worker lambda (Python):
source = event.get('source')
if source == 'serverless-plugin-warmup':
    time.sleep(0.05)
    print(event)
    return lambda_gateway_response(200, {"status": "lambda warmup"})
It was the warmer (Node) lambda going haywire, even though all the logs pointed at the worker (Python) lambdas. After setting context.callbackWaitsForEmptyEventLoop = false, the problem disappeared.
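A minimal sketch of where that flag goes, assuming the warmer handler shown above (only the first lines change):

module.exports.warmUp = async (event, context, callback) => {
  // Tell the Node.js Lambda runtime not to wait for the event loop to drain
  // (e.g. lingering SDK sockets) before finishing the invocation.
  context.callbackWaitsForEmptyEventLoop = false;

  console.log("Warm Up Start");
  // ... rest of the handler as above
}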

Using asynchronous Nightwatch After Hook with client interaction does not work

As far as I can tell, using promises or callbacks in the After hook prevents the command queue from executing. I'm trying to figure out why; any help or suggestions are appreciated. The closest issue I could find on GitHub is https://github.com/nightwatchjs/nightwatch/issues/341,
which states: "finding that trying to make browser calls in the after hook is too late; it appears that the session is closed before after is run" (exactly my problem). But no solution is provided. I need to run cleanup steps after my scenarios run, and those cleanup steps need to be able to interact with the browser.
https://github.com/nightwatchjs/nightwatch/wiki/Understanding-the-Command-Queue
In the snippet below, bar is never output - just foo.
const { After } = require('cucumber');
const { client } = require('nightwatch-cucumber');
After(() => new Promise((resolve) => {
  console.log('foo')
  client.perform(() => {
    console.log('bar')
  });
}));
I also tried the callback approach:
After((browser, done) => {
  console.log('foo');
  client.perform(() => {
    console.log('bar');
    done();
  });
});
But similar to the 1st example, bar is never output, just foo.
You can instead use something like:
const moreWork = async () => {
  console.log('bar');
  await new Promise((resolve) => {
    setTimeout(resolve, 10000);
  })
}

After(() => client.perform(async () => {
  console.log('foo');
  moreWork();
}));
But the asynchronous nature of moreWork means that the client terminates before my work is finished, so this isn't really working for me. You can't use an await in the perform, since they are in different execution contexts.
Basically, the only way to get client commands to execute in the After hook is my third example, but it prevents me from using async.
The 1st and 2nd examples would be great if the command queue didn't freeze and prevent execution.
Edit: I'm finding more issues on GitHub stating that the browser is not available in before/after hooks: https://github.com/nightwatchjs/nightwatch/issues/575
What are you supposed to do if you want to clean up using the browser after all features have run?
Try the following
After(async () => {
  await client.perform(() => {
    ...
  });
  await moreWork();
})

How fast / efficient is Bluebird's timeout?

The following example times out in most cases (outputs timed out):
const Promise = require('bluebird');

new Promise(resolve => {
  setTimeout(resolve, 1000);
})
  .timeout(1001)
  .then(() => {
    console.log('finished');
  })
  .catch(error => {
    if (error instanceof Promise.TimeoutError) {
      console.log('timed out');
    } else {
      console.log('other error');
    }
  });
Does this mean that Bluebird's promise overhead takes longer than 1 ms?
I often see it time out even if I use .timeout(1002).
The main reason for asking: I'm trying to figure out what a safe threshold is, which gets more important with smaller timeouts.
Using Bluebird 3.5.0, under Node.js 8.1.2.
I have traced your bug in Bluebird's code. Consider this:
const p = new Promise(resolve => setTimeout(resolve, 1000));
const q = p.timeout(1001); // Bluebird spawns setTimeout(fn, 1001) deep inside
That looks rather innocent, yeah? Not in this case, though. Internally, Bluebird implemented it something like this (not actually valid JS; the timeout-clearing logic is omitted):
Promise.prototype.timeout = function(ms) {
  const original = this;
  let result = original.then();   // Looks like noop
  setTimeout(() => {
    if (result.isPending()) {
      make result rejected with TimeoutError;   // Pseudocode
    }
  }, ms);
  return result;
}
The bug was the result.isPending() call. There was a brief moment when original.isPending() === false but result.isPending() === true, because the "resolved" status had not yet propagated from original to its children. Your code hit that extremely short window and BOOM, you had a race condition.
I think what's going on here is that there's a race between the time the rest of the promise chain takes and the timer from .timeout(). Since they are so close in timing, sometimes one wins and sometimes the other - they are racy. When I run the following code, which logs the sequence of events, I get different ordering on different runs. The exact output order is unpredictable (i.e. racy).
const Promise = require('bluebird');

let buffer = [];

function log(x) {
  buffer.push(x);
}

new Promise(resolve => {
  setTimeout(() => {
    log("hit my timeout");
    resolve();
  }, 1000);
}).timeout(1001).then(() => {
  log('finished');
}).catch(error => {
  if (error instanceof Promise.TimeoutError) {
    log('timed out');
  } else {
    log('other error');
  }
});

setTimeout(() => {
  console.log(buffer.join("\n"));
}, 2000);
Sometimes this outputs:
hit my timeout
finished
And, sometimes it outputs:
hit my timeout
timed out
As has been mentioned in the comments, if .then() were always executed via a microtask (which should precede any macrotask), one would think that the .then() would precede the setTimeout() from the .timeout(), but things are apparently not that simple.
Since the details of promise .then() scheduling are not mandated by the specification (only that the stack is clear of application code), a code design should not assume a specific scheduling algorithm. Thus, a timeout this close to the completion of the async operation it follows can be racy and therefore unpredictable.
If you could explain exactly what problem you're trying to solve, we could probably offer more concrete advice about what to do. No timers in JavaScript are precise to the millisecond, because JS is single-threaded and all timer events have to go through the event queue; their callbacks run only when their event gets serviced (not exactly when the timer fired). That said, timer events are always served in order, so a setTimeout(..., 1000) will always come before a setTimeout(..., 1001) scheduled at the same time, even though there may not be exactly a 1 ms delta between the execution of the two callbacks.
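As an illustration of picking a safer threshold, here is a minimal sketch that gives .timeout() some headroom over the operation it guards, so that ordinary scheduling jitter cannot win the race (the 50 ms margin is an arbitrary assumption, not a Bluebird recommendation):

const Promise = require('bluebird');

const OPERATION_MS = 1000;
const MARGIN_MS = 50;   // arbitrary headroom; tune for your environment

new Promise(resolve => {
  setTimeout(resolve, OPERATION_MS);
})
  .timeout(OPERATION_MS + MARGIN_MS)   // 1050 ms: no longer a photo finish
  .then(() => console.log('finished'))
  .catch(Promise.TimeoutError, () => console.log('timed out'));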

How to Mock and test using an RxJS subject?

I have some functions that accept an RxJS Subject (backed by a socket) that I want to test. I'd like to mock the subject in a request-reply fashion. Since I'm unsure of a clean Rx way to do this, I'm tempted to use an EventEmitter to build my fake socket.
Generally, I want to:
check that the message received on my "socket" matches expectations
respond to that message on the same subject: observer.next(resp)
I do need to be able to use data from the message to form the response as well.
The code being tested is:
export function acquireKernelInfo(sock) {
  // set up our JSON payload
  const message = createMessage('kernel_info_request');

  const obs = shell
    .childOf(message)
    .ofMessageType('kernel_info_reply')
    .first()
    .pluck('content', 'language_info')
    .map(setLanguageInfo)
    .publishReplay(1)
    .refCount();

  sock.next(message);

  return obs;
}
You could manually create two subjects and "glue them together" as one Subject with Subject.create:
const sentMsgs = [];
const receivedMsgs = [];

const sent = new Rx.Subject();
const received = new Rx.Subject();

const mockWebSocketSubject = Rx.Subject.create(sent, received);

const s1 = sent.subscribe(
  (msg) => sentMsgs.push({ next: msg }),
  (err) => sentMsgs.push({ error: err }),
  () => sentMsgs.push({ complete: true })
);

const s2 = received.subscribe(
  (msg) => receivedMsgs.push({ next: msg }),
  (err) => receivedMsgs.push({ error: err }),
  () => receivedMsgs.push({ complete: true })
);

// to send a message
// (presumably whatever system you're injecting this into is doing the sending)
sent.next('weee');

// to mock a received message
received.next('blarg');

s1.unsubscribe();
s2.unsubscribe();
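The question also asks to form the response from the received message. Here is a sketch of a simple auto-responder on top of the mock above; the message shapes are invented (loosely following the fields the tested code plucks), so adapt them to whatever createMessage actually produces:

// Reply whenever the code under test sends a kernel_info_request.
sent.subscribe((msg) => {
  if (msg.header && msg.header.msg_type === 'kernel_info_request') {
    received.next({
      parent_header: msg.header,                        // ties the reply to its request
      header: { msg_type: 'kernel_info_reply' },
      content: { language_info: { name: 'python' } },
    });
  }
});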
That said, it's really a matter of what you're testing, how it's structured, and what the API is.
Ideally you'd be able to run your whole test synchronously. If you can't for some Rx-related reason, you should look into the TestScheduler, which has facilities to run tests in virtualized time.
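For reference, here is a minimal TestScheduler sketch using the RxJS 6+ API (marble syntax); note that the question's code is written in the older RxJS 5 style, whose TestScheduler differs:

import { TestScheduler } from 'rxjs/testing';
import { map } from 'rxjs/operators';

const scheduler = new TestScheduler((actual, expected) => {
  // Plug in the assertion library of your choice.
  expect(actual).toEqual(expected);
});

scheduler.run(({ cold, expectObservable }) => {
  // Virtual time: '--a--b|' emits a at frame 2, b at frame 5, then completes.
  const source = cold('--a--b|', { a: 1, b: 2 });
  const result = source.pipe(map(x => x * 10));
  expectObservable(result).toBe('--x--y|', { x: 10, y: 20 });
});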
