How to use NestJS modules in AWS Lambda functions - aws-lambda

I have a NestJS application which consists of many modules: databaseModule, userModule, devicesModule, etc. They are all bundled into one ApplicationModule, which handles server actions.
But now I need to add a Lambda function to my project, and it needs to call some methods from databaseModule, but I don't know how to do it.
Pseudo code that I imagine:
export const handler: Handler = (event: any, context: Context, callback) => {
  const dbModule = DatabaseModule.build();
  dbModule.get(UserService).createProject('my_project');
  callback(null, event);
};
I think NestJS should have similar functionality, but I can't find it on the official page.
P.S. I can't just use UserService directly because it depends on other services and providers in DatabaseModule. That is why I want the module to be fully configured, so that I can use its services.

I found an answer: https://docs.nestjs.com/application-context
We can use our submodules like this (for a non-HTTP context such as Lambda, that page recommends createApplicationContext rather than create):
const app = await NestFactory.createApplicationContext(ApplicationModule);
const tasksService = app.get(TasksService);

You can use the lifecycle events from NestJS (https://docs.nestjs.com/fundamentals/lifecycle-events).
That way, you can implement the OnModuleInit interface on the service and run the desired logic when the module is initialized (the hook can be sync or async).
There are some other events that can also be useful, like onApplicationBootstrap()
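For illustration, here is a minimal sketch of the hook. The class is a plain stand-in for an @Injectable() service; in a real Nest app it would implement the OnModuleInit interface from @nestjs/common, and Nest would call the hook for you.

```javascript
// NestJS calls onModuleInit() on any provider that defines it, once the
// host module's dependencies have been resolved. The hook may be async.
class TasksService {
  constructor() {
    this.initialized = false;
  }

  async onModuleInit() {
    // e.g. warm a cache, open a database connection, run startup queries...
    this.initialized = true;
  }
}
```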

Related

Lambda not sending binary data using load balancer pdf

I am using AWS Lambda with an Application Load Balancer. I have been fighting to figure out why I am getting only a piece of my content (a few bytes).
I am trying to return a PDF file as binary data. This works locally, and it works when my application is deployed to EC2 or anywhere other than Lambda.
I have looked at answers like this one: AWS Lambda fails to return PDF file
but those answers all discuss Lambda settings in regards to API Gateway. I am not using API Gateway; I am using the Application Load Balancer. Any ideas?
Found the answer! I am using serverless-http with express. I needed to add the binary parameter for my return content type.
const serverless = require('serverless-http');
const app = require('./index');

const handler = serverless(app, {
  binary: ['application/pdf']
});

exports.handler = async (event, context) => {
  return handler(event, context);
};

React + Redux & initial request configuration

I'm working on a React+Redux application and I'm stuck on some app initialization problems, so I want to ask:
How do I do the initial setup for the application, e.g. where do I set default request headers for communication with the API?
Let's assume I have a requestManager module which is not a React component.
It's a kind of proxy: it adds the proper headers to every request.
But when a user logs out and logs back in, I need to set the proper token in the header.
How do I accomplish that?
Can a non-React component listen for store events?
What are the best practices for that?
Are there some good examples?
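On the store-listening question: yes, a plain module can subscribe to the Redux store directly via store.subscribe(). A minimal sketch follows; requestManager here is a stand-in for the proxy module described above, syncTokenWithStore is a hypothetical helper, and the state shape ({ auth: { token } }) is an assumption. The store argument only needs the standard Redux subscribe()/getState() contract.

```javascript
// A plain (non-React) module that keeps its auth header in sync with the store.
const requestManager = {
  headers: {},
  setToken(token) {
    // attach or clear the Authorization header for subsequent requests
    this.headers.Authorization = token ? `Bearer ${token}` : undefined;
  },
};

function syncTokenWithStore(store) {
  let currentToken;
  store.subscribe(() => {
    // assumed state shape: { auth: { token } }
    const token = store.getState().auth.token;
    if (token !== currentToken) {
      currentToken = token;
      requestManager.setToken(token);
    }
  });
}
```

You would call syncTokenWithStore(store) once during app bootstrap, right after creating the store, so login/logout actions automatically update the header.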
If you are using webpack: it has a DefinePlugin which you can declare once and then use in every .js file.
If you have no experience with webpack, see
https://github.com/petehunt/webpack-howto#6-feature-flags
In the webpack configuration:
var definePlugin = new webpack.DefinePlugin({
  'process.env': {
    'NODE_ENV': JSON.stringify('development'),
    'API_HOST': JSON.stringify('localhost:3001'),
    'API_TOKEN': JSON.stringify('my-token')
  }
});
Note that every value needs JSON.stringify, because DefinePlugin performs a direct text substitution: a bare string like "localhost:3001" would be injected as code, not as a string literal.
In a js file:
if (process.env.NODE_ENV) { // substituted at build time
  // whatever
  console.log(process.env.API_TOKEN); // prints "my-token"
}

runtime configuration for AWS Lambda function

I have an AWS Lambda function that needs to connect to a remote TCP service. Is there any way to configure the Lambda function with the IP address of the remote service after the Lambda function has been deployed to AWS? Or do I have to bake the configuration into the packaged Lambda function before it's deployed?
I found an approach that I use to support a test environment and a production environment that may help you.
I name the test version of the function TEST-ConnectToRemoteTcpService and the production version PRODUCTION-ConnectToRemoteTcpService. This lets me pull the environment name out with a regular expression.
Then I store config/test.json and config/production.json in the zip file that I upload as the function's code. The zip file is extracted into the directory process.env.LAMBDA_TASK_ROOT when the function runs, so I can load the right file and get the config I need.
Some people don't like storing the config in the code zip file, which is fine - you can just load a file from S3 or use whatever strategy you like.
Code for reading the file from the zip:
const fs = require('fs');

const readConfiguration = () => {
  return new Promise((resolve, reject) => {
    // the TEST-/PRODUCTION- prefix of the function name selects the config file
    const environment = /^(.*?)-.*/.exec(process.env.AWS_LAMBDA_FUNCTION_NAME)[1].toLowerCase();
    console.log(`environment is ${environment}`);
    fs.readFile(`${process.env.LAMBDA_TASK_ROOT}/config/${environment}.json`, 'utf8', (err, data) => {
      if (err) {
        reject(err);
      } else {
        const config = JSON.parse(data);
        console.log(`configuration is ${data}`);
        resolve(config);
      }
    });
  });
};
Support for environment variables was added for AWS Lambda starting November 18, 2016. Adding a variable to an existing function can be done via command line as shown below or from the AWS Console.
aws lambda update-function-configuration \
--function-name MyFunction \
--environment Variables={REMOTE_SERVICE_IP=100.100.100.100}
Documentation can be found here.
You can invoke the Lambda function via SNS topic subscription and have it configure itself from the payload inside the SNS event.
Here's the official guide on how to do that Invoking Lambda via SNS.
A few options, depending on the use case:
If your config will not change, you can store it in S3 objects and read them from the Lambda, or trigger your Lambda on config changes. (Though this is the cheapest way, you are limited in what you can do compared to the other alternatives.)
If the config changes constantly, then DynamoDB key/value storage is an alternative.
If DynamoDB is too expensive for the frequent reads/writes and not worth the value, you can have the TCP service post its config to an SQS queue (or to an SNS topic if you want to trigger the Lambda when the service posts a new config).

Load async YouTube API in to ReactJS application

I need to load the YouTube JavaScript API which requires you to include a script tag with an onload query string which points towards a global callback function. Once the Google client is loaded the callback gets called:
<script>
  function init() {
    gapi.client.setApiKey('465723722VeAji1ZVqYiJxB7oyMTVLI');
    gapi.client.load('youtube', 'v3', function() {
      YouTubeClientLoaded = true;
    });
  }
</script>
<script src="https://apis.google.com/js/client.js?onload=init"></script>
This all works fine in principle, but I'm having a hard time working out how to integrate this global callback into my ReactJS application. How can I tell React that the client is loaded and ready to use?
I've had a few thoughts, but all seem hacky. I thought about starting the React app and setting a timer that periodically checks for the existence of the YouTubeClientLoaded global variable (or the gapi object), or perhaps a pubsub mechanism so my global init function can emit when it's ready. The problem with the pubsub route is that the pubsub itself would also need to be global, so then how do I get that communicating with React...
Is there a more correct way of achieving this?
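One option that avoids polling is to wrap the global onload callback in a promise that the React app awaits before touching gapi. A minimal sketch: the global name init matches the ?onload=init parameter in the script tag above, while gapiReady is a hypothetical name; the gapi.client.load call is left as a comment since it only exists once Google's script has loaded.

```javascript
// Expose a promise that resolves when client.js?onload=init fires the
// global `init` callback. React components can await it (e.g. in
// componentDidMount) instead of polling a global flag.
const gapiReady = new Promise((resolve) => {
  globalThis.init = () => {
    // gapi is available here; in a real app, load the YouTube API first:
    // gapi.client.load('youtube', 'v3', resolve);
    resolve();
  };
});
```

A component can then do `gapiReady.then(() => this.setState({ clientLoaded: true }))` and render accordingly.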

Use gapi.client javascript to execute my custom Google API

I have a service that is successfully deployed to Google Endpoints and it is accessible through browser.
Now I am trying to load Google API javascript client library to call my services using javascript.
As far as I know, I should do this:
gapi.client.load([API_NAME], 'v1', function() {
  var request = gapi.client.[API_NAME].[SERVICE_NAME].[METHOD]();
  request.execute(function(jsonResp, rawResp) {...});
});
But I always get a runtime exception complaining that gapi.client.[API_NAME] is undefined. I do the same thing with any Google API (such as Plus) and it works fine. For example, if I load the 'plus' API, I get access to gapi.client.plus... and I can call its methods.
Am I missing something? All the samples and documents are about Google's own service APIs, and I could not find a sample for custom APIs (the ones that developers write).
I even tried gapi.client.request with different paths (absolute and relative), but I get a 404 - Not Found error in "status".
var request = gapi.client.request({
  'path': 'https://[APP_NAME].appspot.com/_ah/api/[SERVICE_NAME]/v1/[METHOD]',
  'method': 'GET'
});
request.execute(function(jsonResp, rawResp) {...});

var request = gapi.client.request({
  'path': '/[SERVICE_NAME]/v1/[METHOD]',
  'method': 'GET'
});
request.execute(function(jsonResp, rawResp) {...});
The problem was a missing parameter in the call to gapi.client.load().
I looked at the definition of gapi.client.load at https://developers.google.com/api-client-library/javascript/reference/referencedocs#gapiclientload
gapi.client.load(name, version, callback)
which I later found out is not totally correct: an optional parameter is missing (app_api_root_url).
gapi.client.load(name, version, callback, app_api_root_url)
If app_api_root_url is omitted, the client is loaded for Google's own service APIs only. For your own API it should be something like https://myapp.appspot.com/_ah/api.
You can find more details on how to use gapi.client.load() properly at https://developers.google.com/appengine/docs/java/endpoints/consume_js
As you can see in the following piece of code, I didn't have the ROOT parameter when I was calling gapi.client.load, and that is why Google by default was looking at its own service APIs and obviously could not find mine.
var ROOT = 'https://your_app_id.appspot.com/_ah/api';
gapi.client.load('your_api_name', 'v1', function() {
  var request = gapi.client.your_api_name.your_method_name();
  request.execute(function(jsonResp, rawResp) {
    // do the rest of what you need to do
  });
}, ROOT);
NOTE: your_app_id is used in the ROOT parameter only to load the client script. After loading is done, you will have an object named after your API, not your app. That object is like your Java (service) class, and you can use it to invoke methods directly.
