Firebase Functions take 1 second to start any function - performance

I have a Firebase function that makes 3 simple calls to the Firebase Database.
I added some logs to profile it a little and ran the function 5 times to make sure it was not a cold start. Here is what I get on the 6th run:
8:10:34.133 am - Function execution started
8:10:34.133 am - Billing account not configured.....
8:10:35.284 am - MyFunction start and make 3 database calls
8:10:35.456 am - MyFunction database results obtained
8:10:35.461 am - MyFunction execution finished
So, it takes just ~250 ms to make the database calls, but almost a full second passes between the alleged function start and the execution of the first line of my handler.
My question is: is this really the case (which would make Firebase Functions unusable for a serverless API), or am I doing something wrong?
The function is an HTTPS trigger, written with Express. Only CORS is applied.
Setup for functions:
const unsecure = express();
unsecure.get("/myFunc", require("./core/myFunc.f.js"))
myFunc.f.js:
const functions = require("firebase-functions");
const admin = require("firebase-admin");

try {
  admin.initializeApp(
    Object.assign({}, functions.config().firebase, {
      credential: admin.credential.cert(
      ),
      storageBucket: ""
    })
  );
} catch (e) {
  // can be initialized only once
}

const database = admin.database();

module.exports = (request, response) => {
  console.log("/myFunc started and calling database");
  Promise.all([
    database.ref("database_node").child("default").once("value"),
    database.ref("database_node").child("no_code").once("value"),
    database.ref("database_node").child("test_code").once("value")
  ]).then(values => {
    console.log("/myFunc got database results: " + JSON.stringify(values));
    response.status(200).send({});
  });
};
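One way to narrow down where that extra second goes (a sketch I'm adding, not part of the original question) is to log a timestamp at module load and again at handler entry. On a cold start, the module-load log shows how long initialization takes; on a warm run, any remaining gap between "Function execution started" and the handler-entry log is overhead outside your own code (routing, platform, etc.).

// Profiling sketch (assumed additions, not the original code):
const loadedAt = Date.now();
console.log("/myFunc module loaded");

module.exports = (request, response) => {
  console.log("/myFunc handler entered " + (Date.now() - loadedAt) + " ms after module load");
  // ... the three database calls as above
};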

Related

Invoking AWS Step Function from Lambda Fails Silently in Serverless 3

I was able to start a state machine from a lambda in Serverless v2 using this technique:
// Imports assumed from the AWS SDK v3 Step Functions client:
const { SFNClient, StartSyncExecutionCommand } = require("@aws-sdk/client-sfn");

const request = {
  data: someDataGoesHere
};
const params = {
  stateMachineArn: process.env.statemachine_arn,
  input: JSON.stringify(request),
  name: uniqueNameGoesHere,
};
const steps = new SFNClient({ region: "us-east-1" });
const command = new StartSyncExecutionCommand(params);
console.log("Starting State Machine", params);
const result = await steps.send(command);
console.log("Back from State Machine", result);
After upgrading Serverless Framework to version 3, this code fails silently - the call to steps.send(command) never returns and the lambda times out (so "Back from State Machine" is never written to the lambda's log). An entry is not created in the CloudWatch logs for the step function, so there doesn't appear to be any way to figure out what went wrong. I have verified that stateMachineArn is set correctly.
I have tried removing and re-deploying the entire stack, but still can't start the step function. Any debugging advice would be appreciated!
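One debugging step worth trying (a sketch under assumptions, not a confirmed fix) is to wrap the send call so that a hang or a swallowed rejection at least produces a log entry before the lambda itself is killed:

// Hypothetical debugging wrapper: race the SDK call against a shorter timeout
// so a hang gets logged before the lambda's own timeout hits.
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`steps.send timed out after ${ms} ms`)), ms)),
  ]);

try {
  const result = await withTimeout(steps.send(command), 10000);
  console.log("Back from State Machine", result);
} catch (err) {
  console.error("StartSyncExecution failed or hung:", err);
  throw err;
}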

Expecting a Promise *not* to complete, in Jest

I need to test that something does not happen.
While testing something like that may be worth a discussion of its own (how long a wait is long enough?), I was hoping Jest would have a better way to integrate with its test timeouts. So far I haven't found one, but let's begin with the test.
test('User information is not distributed to a project where the user is not a member', async () => {
  // Write in 'userInfo' -> should NOT turn up in project 1.
  //
  await collection("userInfo").doc("xyz").set({ displayName: "blah", photoURL: "https://no-such.png" });

  // (firebase-jest-testing 0.0.3-beta.3)
  await expect( eventually("projects/1/userInfo/xyz", o => !!o, 800 /*ms*/) ).resolves.toBeUndefined();

  // ideally:
  //await expect(prom).not.toComplete;   // ..but with cancelling such a promise
}, 9999 /*ms*/ );
eventually returns a Promise, and I'd like to check that, within the test's normal timeout, such a Promise does not complete (neither resolves nor rejects).
Jest provides .resolves and .rejects, but nothing that would combine the two.
1. Can I create the anticipated .not.toComplete using some Jest extension mechanism?
2. Can I create a "run just before the test would time out" trigger (with the ability to make the test pass or fail)?
I think suggestion 2 might turn out to be handy, and I can create a feature request for it, but let's see what comments this gets.
Edit: There's a further complexity in that JS Promises cannot be cancelled from outside (but they can time out, from within).
I eventually solved this with a custom matcher:
/*
* test-fns/matchers/timesOut.js
*
* Usage:
*   <<
*   expect(prom).timesOut(500);
*   <<
*/
import { expect } from '@jest/globals'

expect.extend({
  async timesOut(prom, ms) {   // (Promise of any, number) => { message: () => string, pass: boolean }

    // Wait for either 'prom' to complete, or a timeout.
    //
    const [resolved, error] = await Promise.race([ prom, timeoutMs(ms) ])
      .then(x => [x])
      .catch(err => [undefined, err]);

    const pass = (resolved === TIMED_OUT);

    return pass ? {
      message: () => `expected not to time out in ${ms}ms`,
      pass: true
    } : {
      message: () => `expected to time out in ${ms}ms, but ${ error ? `rejected with ${error}` : `resolved with ${resolved}` }`,
      pass: false
    }
  }
})

const timeoutMs = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); })
  .then(_ => TIMED_OUT);

const TIMED_OUT = Symbol()
The good side is that this can be added to any Jest project.
The downside is that one needs to separately state the delay (and guarantee Jest's own timeout does not fire before it).
This makes the question's code become:
await expect( eventually("projects/1/userInfo/xyz") ).timesOut(300)
Note for Firebase users:
Jest does not exit to the OS level if Firestore JS SDK client listeners are still active. You can prevent this by unsubscribing from them in afterAll - but that means keeping track of which listeners are alive and which are not. The firebase-jest-testing library does this for you, under the hood. Also, this will eventually ;) get fixed by Firebase.
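For reference, here is a minimal sketch of that manual bookkeeping (names are illustrative; firebase-jest-testing does the equivalent under the hood):

// Collect the unsubscribe functions returned by each listener you create...
const unsubs = [];
const track = (unsub) => { unsubs.push(unsub); return unsub; };

// e.g. track( someDocRef.onSnapshot(handleSnapshot) );   // hypothetical listener

// ...and tear them all down so Jest can exit cleanly.
afterAll(() => {
  unsubs.forEach(unsub => unsub());
});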

Apollo server subscription not recognizing Async Iterable

I'm having an issue with Apollo GraphQL's subscription. When attempting to start the subscription I'm getting this in return:
"Subscription field must return Async Iterable. Received: { pubsub: { ee: [EventEmitter], subscriptions: {}, subIdCounter: 0 }, pullQueue: [], pushQueue: [], running: true, allSubscribed: null, eventsArray: [\"H-f_mUvS\"], return: [function return] }"
I have other subscriptions set up that are completely functional, so I can confirm the web server is set up correctly.
I'm just curious if anyone else has ever run into this issue before.
Source code in PR diff (it's an open source project):
https://github.com/astronomer/houston-api/pull/165/files
(Screenshot: the error as shown in the GraphQL playground.)
I don't think this is an issue specific to the PR you posted. I'd be surprised if any of the subscriptions were working as is.
Your subscribe function should return an AsyncIterable, as the error states. Since it returns a call to createPoller, createPoller should return an AsyncIterable. But here's what that function looks like:
export default function createPoller(
  func,
  pubsub,
  interval = 5000,   // Poll every 5 seconds
  timeout = 3600000  // Kill after 1 hour
) {
  // Generate a random internal topic.
  const topic = shortid.generate();

  // Create an async iterator. This is what a subscription resolver expects to be returned.
  const iterator = pubsub.asyncIterator(topic);

  // Wrap the publish function on the pubsub object, pre-populating the topic.
  const publish = bind(curry(pubsub.publish, 2)(topic), pubsub);

  // Call the function once to get the initial dataset.
  func(publish);

  // Then set up a timer to call the passed function. This is the poller.
  const poll = setInterval(partial(func, publish), interval);

  // If we are passed a timeout, kill the subscription after that interval has passed.
  const kill = setTimeout(iterator.return, timeout);

  // Create a typical async iterator, but overwrite the return function
  // and cancel the timer. The return function gets called by the apollo server
  // when a subscription is cancelled.
  return {
    ...iterator,
    return: () => {
      log.info(`Disconnecting subscription ${topic}`);
      clearInterval(poll);
      clearTimeout(kill);
      return iterator.return();
    }
  };
}
So createPoller creates an AsyncIterable, but then creates a shallow copy of it and returns that. graphql-subscriptions uses iterall's isAsyncIterable for the check that's producing the error you're seeing. Because of the way isAsyncIterable works, a shallow copy won't fly. You can see this for yourself:
const { PubSub } = require('graphql-subscriptions')
const { isAsyncIterable } = require('iterall')
const pubSub = new PubSub()
const iterable = pubSub.asyncIterator('test')
const copy = { ...iterable }
console.log(isAsyncIterable(iterable)) // true
console.log(isAsyncIterable(copy)) // false
So, instead of returning a shallow copy, createPoller should just mutate the return method directly:
export default function createPoller(...) {
  ...
  iterator.return = () => { ... }
  return iterator
}
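Fleshed out, that fix could look something like this (a sketch assuming the same helpers - shortid, bind/curry/partial, log - used by the original createPoller):

export default function createPoller(
  func,
  pubsub,
  interval = 5000,
  timeout = 3600000
) {
  const topic = shortid.generate();
  const iterator = pubsub.asyncIterator(topic);
  const publish = bind(curry(pubsub.publish, 2)(topic), pubsub);

  func(publish);
  const poll = setInterval(partial(func, publish), interval);
  const kill = setTimeout(() => iterator.return(), timeout);

  // Mutate the existing iterator instead of spreading it into a new object,
  // so isAsyncIterable still recognizes it.
  const originalReturn = iterator.return.bind(iterator);
  iterator.return = () => {
    log.info(`Disconnecting subscription ${topic}`);
    clearInterval(poll);
    clearTimeout(kill);
    return originalReturn();
  };

  return iterator;
}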

Lambdas stop invoking after a period of time

Here's my setup:
A Python 3.6 lambda function, which I want to keep pre-warmed at a certain concurrency level (say, 10). The lambda's initialization is painful enough that I don't want to inflict this cost on visitors at random. I call these lambdas "workers".
A Node lambda function which runs every 5 minutes to try to pre-warm 10 instances. It uses the Event invocation type for 9 of them, and RequestResponse for 1. There is only ever one or zero of this lambda running at any one time. I call this a "warmer".
I followed the guidelines at https://www.jeremydaly.com/lambda-warmer-optimize-aws-lambda-function-cold-starts/, namely:
Don’t ping more often than every 5 minutes
Invoke the function directly (i.e. don’t use API Gateway to invoke it)
Pass in a test payload that can be identified as such
Create handler logic that replies accordingly without running the whole function
Here's the problem: this works great for several minutes. Then, as I watch the logs, I start to get timeouts from my worker lambda invocations. The timeouts quickly take over all the invocations that the warmer is trying to launch.
At that point, no worker lambdas are pre-warmed any more. The warmer keeps trying on its CloudWatch Events cron schedule, suffering 100% timeouts, and finally Lambda stops trying to launch my worker lambdas at all. It feels as if some aspect of Lambda is getting its state scrambled. The only way to recover is to re-deploy the lambda, which buys me another hour of pre-warmed lambdas working.
Questions:
1. How do I get visibility into why my worker lambdas start timing out and then become completely non-responsive?
2. What is the definition of a "Concurrent Execution"? The main Lambda dashboard shows me a chart of them, yet it seems to report more than twice as many Concurrent Executions as I'm requesting.
Here's the warmup lambda code (Node):
// warmer
"use strict";

/** Generated by Serverless WarmUP Plugin at ${new Date().toISOString()} */
const aws = require("aws-sdk");
aws.config.region = "${this.options.region}";
const lambda = new aws.Lambda({ httpOptions: { timeout: 60000 } });
const functionNames = ${JSON.stringify(functionNames)};
const delay = ms => new Promise(res => setTimeout(res, ms));
const concurrency = 10;

module.exports.warmUp = async (event, context, callback) => {
  console.log("Warm Up Start");
  const invokes = await Promise.all(functionNames.map(async (functionName) => {
    let invocations = [];
    try {
      for (let i = 1; i <= concurrency; i++) {
        let params = {
          FunctionName: functionName,
          InvocationType: (i === concurrency) ? 'RequestResponse' : 'Event',
          LogType: 'None',
          Qualifier: process.env.SERVERLESS_ALIAS || "$LATEST",
          Payload: JSON.stringify({
            source: 'serverless-plugin-warmup',
            '__WARMER_INVOCATION__': i,
            '__WARMER_CONCURRENCY__': concurrency,
            '__WARMER_REQUESTED__': new Date().toISOString(),
          })
        };
        invocations.push(lambda.invoke(params).promise());
      }
      return await delay(75).then(Promise.all(invocations.map(p => p.catch(e => e)))
        .then(results => console.log('results', results))
        .catch(e => {
          console.log(e);
          return e;
        })
      );
    } catch (e) {
      console.log(`Warm Up Invoke Error: ${functionName}`, e);
      return false;
    }
  }));
  console.log(`Warm Up Finished`);
};
And here's the worker lambda (Python):
source = event.get('source')
if source == 'serverless-plugin-warmup':
    time.sleep(0.05)
    print(event)
    return lambda_gateway_response(200, {"status": "lambda warmup"})
It was the warmer (Node) lambda going haywire, even though all the logs pointed at the worker (Python) lambdas. After setting context.callbackWaitsForEmptyEventLoop = false, the problem disappeared.
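For clarity, this is roughly where that setting goes in the warmer handler (a sketch of the fix described above):

module.exports.warmUp = async (event, context, callback) => {
  // Don't keep the container alive waiting for the open SDK sockets left by the
  // fire-and-forget 'Event' invocations; return as soon as the handler resolves.
  context.callbackWaitsForEmptyEventLoop = false;

  console.log("Warm Up Start");
  // ... rest of the warm-up logic as above
};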

Node.js Express mongoose query find

I have a little problem with Express and mongoose in Node.js. I pasted the code on Pastebin for better visibility.
Here is the app.js: http://pastebin.com/FRAFzvjR
Here is the routes/index.js: http://pastebin.com/gDgBXSy6
Since db.js isn't big, I'll post it here:
var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

module.exports = function () {
  mongoose.connect('mongodb://localhost/test',
    function (err) {
      if (err) { throw err; }
    }
  );
};

var User = new Schema({
  username: { type: String, index: { unique: true } },
  mdp: String
});

module.exports = mongoose.model('User', User);
As you can see, I used console.log to debug my app, and I found that, in routes/index.js, only the "a" log appeared. That's weird; it's as if the script stopped (or continued without any response) when
userModel.findOne({username: req.body.username}, function(err, data)
is reached.
Any idea?
You never connect to your database. Your connect method is what db.js exports, but it is never called as a function from your app.
Also, you are overwriting your module.exports - if you want multiple functions/classes to be exported, you must add them as different properties of the module.exports object, i.e.:
module.exports.truthy = function() { return true; }
module.exports.falsy = function() { return false; }
When you then require that module, you must call the function (trueFalse.truthy();) in order to get the value. Since you never execute the function that connects to your database, you are not receiving any data.
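For example, db.js could export both the connect helper and the model as properties (a sketch; the property names are illustrative):

var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

// Export the connect helper as a named property instead of overwriting module.exports.
module.exports.connect = function () {
  mongoose.connect('mongodb://localhost/test', function (err) {
    if (err) { throw err; }
  });
};

var User = new Schema({
  username: { type: String, index: { unique: true } },
  mdp: String
});

// Export the model as another property.
module.exports.User = mongoose.model('User', User);

The app would then call require('./db').connect(); once at startup and use require('./db').User for its queries.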
A couple of things real quick.
1. Make sure you're on the latest mongoose (2.5.3). Update your package.json and run npm update.
2. Try doing a console.log(arguments) before your if (err). It's possible that an error is happening.
3. Are you sure you're really connecting to the database? Try explicitly connecting at the top of your file (just for testing): mongoose.connect('mongodb://localhost/my_database');
I'll update if I get any other ideas.
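Putting those suggestions together, a quick debugging version of the route handler might look like this (a sketch; the explicit connect is temporary):

// Temporary, just to rule out a missing connection (suggestion 3):
mongoose.connect('mongodb://localhost/test');

userModel.findOne({ username: req.body.username }, function (err, data) {
  console.log('findOne callback arguments:', err, data);   // suggestion 2
  if (err) { throw err; }
  // ... continue with 'data'
});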
