I'm using the Jovo framework (version 1.0.0) and I'm facing the following problem:
In app.js:
app.setHandler({
    'LAUNCH': function() {
        if (this.user().isNewUser()) {
            this.tell('This will never be told on AWS Lambda.');
        }
    }
});
Running locally I can distinguish between (isNewUser === true) and (isNewUser === false), but as soon as I execute it as a Lambda function on AWS, isNewUser is always false. Why is that?
Additionally, the 'NEW_USER' handler isn't triggered either:
'NEW_USER': function() {
}
System environment on local machine:
Windows 10 Home
Node.js: v8.9.1
Lambda function:
Node.js 6.10
I really appreciate any help you can provide.
Both 'NEW_USER' and this.user().isNewUser() need access to a database where the number of sessions is stored for each user.
When you're prototyping locally, Jovo uses the default File Persistence database integration, which saves the data to a local db/db.json file.
However, on AWS Lambda the local file db doesn't work (the Lambda file system isn't persistent across invocations), so you need to set up a DynamoDB configuration. Learn more here: Jovo Framework Docs > Database Integrations > DynamoDB.
Please remember to give your Lambda function's execution role the right permissions to access DynamoDB: AWS Lambda Permissions Model.
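For reference, here's a minimal sketch of the switch in app.js, assuming the setDynamoDb helper from the Jovo v1 database docs (the table name is a placeholder; Jovo keeps its user data, including the session count, there):

// in app.js, before app.setHandler(...): replace the default
// FilePersistence db with DynamoDB ('UserTable' is a placeholder)
app.setDynamoDb('UserTable');

With this in place, both this.user().isNewUser() and the 'NEW_USER' intent should work on Lambda, provided the execution role is allowed to read and write that table.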
Related
Our team has decided to adopt AWS CDK v2 to build and manage our AWS resources. We are also using the AWS Deployment Framework (ADF) to manage the deployment process, creating pipelines with AWS CodePipeline and AWS CodeBuild.
The setup we currently have works for the most part. We seem to have stumbled upon an issue when we attempt to deploy any of our resources that contain assets, such as Lambda functions. Specifically, I am talking about Lambdas whose code is not included inline within the synthesized CloudFormation template, as per this example.
In other words, our Lambda code is expected to be uploaded to S3 before being deployed. I am looking for best-practice guidance on how to configure our accounts and ADF with the CDK to deploy assets which require uploading to S3. At the moment all I can think of is either bootstrapping the accounts we deploy to and/or customising the CDK synthesizer as part of our stack definition; any guidance or thoughts would be appreciated!
In other words, our Lambda code is expected to be uploaded to S3 before being deployed.
Luckily, no. The CDK's Lambda constructs automagically handle local asset bundling and S3 uploading out of the box. The CDK also accepts inline code and existing S3 buckets as code sources, as well as Docker images.
// aws_cdk.aws_lambda
const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_12_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, 'lambda-handler')),
});
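The other code sources mentioned above look roughly like this (a sketch; myBucket, the object key, and the Docker context directory are placeholders):

// inline code (small, dependency-free handlers only)
code: lambda.Code.fromInline('exports.handler = async () => "hello";'),

// an object already uploaded to an existing S3 bucket
code: lambda.Code.fromBucket(myBucket, 'my-function.zip'),

// a container image built from a local directory
new lambda.DockerImageFunction(this, 'MyDockerFunction', {
  code: lambda.DockerImageCode.fromImageAsset(path.join(__dirname, 'docker-handler')),
});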
Even better, the CDK provides runtime-specific Lambda constructs (in dedicated modules) to make life even easier. For instance, the CDK will transpile your TypeScript or build your Go executable for you and send the artifacts to a CDK-managed S3 bucket on cdk deploy.
// aws_cdk.aws_lambda_nodejs
new NodejsFunction(this, 'MyFunction', {
  entry: '/path/to/my/file.ts', // accepts .js, .jsx, .ts and .tsx files
  handler: 'myExportedFunc', // defaults to 'handler'
});

// aws_cdk.aws_lambda_go_alpha
new GoFunction(this, 'handler', {
  entry: 'app/cmd/api',
});

// aws_cdk.aws_lambda_python_alpha
new PythonFunction(this, 'MyFunction', {
  entry: '/path/to/my/function', // required
  runtime: Runtime.PYTHON_3_8, // required
  index: 'my_index.py', // optional, defaults to 'index.py'
  handler: 'my_exported_func', // optional, defaults to 'handler'
});
I am trying to get all the instance (server) IDs based on the app. Let's say I have an app running on a server. How do I know which apps belong to which server? I want my code to find all the instances (servers) that belong to each app. Is there any way to look through the apps in the EC2 console and figure out which servers are associated with each app, preferably using tags?
import boto3
client = boto3.client('ec2')
my_instance = 'i-xxxxxxxx'
(Disclaimer: I work for AWS Resource Groups)
Seeing in your comments that you use tags for all apps, you can use AWS Resource Groups to create a group. The example below assumes you used App: Something as the tag; it first creates a Resource Group and then lists all the members of that group.
Using this group, you can, for example, automatically get a CloudWatch dashboard for those resources, or use the group as a target in Run Command.
import json
import boto3

RG = boto3.client('resource-groups')

RG.create_group(
    Name='Something-App-Instances',
    Description='EC2 Instances for Something App',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::EC2::Instance'],
            'TagFilters': [{
                'Key': 'App',
                'Values': ['Something']
            }]
        })
    },
    Tags={
        'App': 'Something'
    }
)

# List all resources in the group using a paginator
paginator = RG.get_paginator('list_group_resources')
resource_pages = paginator.paginate(GroupName='Something-App-Instances')
for page in resource_pages:
    for resource in page['ResourceIdentifiers']:
        print(resource['ResourceType'] + ': ' + resource['ResourceArn'])
Another option, to just get the list without saving it as a group, would be to call the Resource Groups Tagging API directly (in boto3, the resourcegroupstaggingapi client and its get_resources call).
What you install on an Amazon EC2 instance is totally up to you. You do this by running code on the instance itself. AWS is not involved in the decision of what you install on the instance, nor does it know what you installed on an instance.
Therefore, you will need to keep track of "what apps are installed on what server" yourself.
You might choose to take advantage of Tags on instances to add some metadata, such as the purpose of the server. You could also use AWS Systems Manager to run commands on instances (e.g. to install software) or even use AWS CodeDeploy to roll out software to fleets of servers.
However, even with all of these deployment options, AWS cannot track what you have put on each individual server. You will need to do that yourself.
Update: You can use AWS Resource Groups to view/manage resources by tag.
Here's some sample Python code to list tags by instance:
import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')
instances = ec2_resource.instances.all()

for instance in instances:
    # instance.tags is None for untagged instances, so guard against that
    for tag in instance.tags or []:
        print(instance.instance_id, tag['Key'], tag['Value'])
I can't seem to find anything official about this: does Parse.Config work on Parse Server? It used to work on Parse.com, but when I try to migrate to Parse Server, the REST API seems to fail:
GET http://localhost:1337/parse/config
I'm passing in my app ID. I read somewhere that Config does not work on Parse Server, but I wanted to confirm.
Although it is not officially supported, as mentioned in the docs, there is a way to make it work. It is still an experimental implementation, though.
As mentioned here & here, you should set the environment variable:
PARSE_EXPERIMENTAL_CONFIG_ENABLED=1
Then restart your Node server. If you deployed it on Heroku, for example, run heroku restart -a <APP_NAME> from the CLI.
If that doesn't work, I would suggest simply adding your own route with your configuration options in your project's index.js file, where Express is initialized, like so:
var parseConfig = {
  "params": { /* ...put your options here... */ }
};

// :one? is an optional parameter, kept for old SDK compatibility.
app.all('/parse/:one?/config', function (req, res) {
  res.json(parseConfig);
});
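Once the route (or the experimental flag) is in place, the usual client call should go through; a quick sketch with the JS SDK:

// fetches /parse/config and resolves with a Parse.Config instance
Parse.Config.get().then(function (config) {
  console.log(config.get('someParam')); // any key defined under params above
});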
I have an AWS Lambda function that needs to connect to a remote TCP service. Is there any way to configure the Lambda function with the IP address of the remote service after the Lambda function has been deployed to AWS? Or do I have to bake the configuration into the packaged Lambda function before it's deployed?
I found an approach that I use to support separate test and production environments; it may help you.
I name the test version of the function TEST-ConnectToRemoteTcpService and the production version PRODUCTION-ConnectToRemoteTcpService. This allows me to pull out the environment name with a regular expression.
Then I store config/test.json and config/production.json in the zip file that I upload as code for the function. This zip file is extracted into the directory given by process.env.LAMBDA_TASK_ROOT when the function runs, so I can load the right file and get the config I need.
Some people don't like storing the config in the code zip file, which is fine - you can just load a file from S3 or use whatever strategy you like.
Code for reading the file from the zip:
const fs = require('fs');

const readConfiguration = () => {
  return new Promise((resolve, reject) => {
    // e.g. "TEST-ConnectToRemoteTcpService" yields "test"
    let environment = /^(.*?)-.*/.exec(process.env.AWS_LAMBDA_FUNCTION_NAME)[1].toLowerCase();
    console.log(`environment is ${environment}`);
    fs.readFile(`${process.env.LAMBDA_TASK_ROOT}/config/${environment}.json`, 'utf8', function (err, data) {
      if (err) {
        reject(err);
      } else {
        var config = JSON.parse(data);
        console.log(`configuration is ${data}`);
        resolve(config);
      }
    });
  });
};
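Consuming it from the handler then looks roughly like this (the config property names are whatever your JSON files define; remoteServiceIp is a hypothetical example):

exports.handler = (event, context, callback) => {
  readConfiguration()
    .then((config) => {
      // e.g. connect to config.remoteServiceIp here
      callback(null, 'done');
    })
    .catch(callback);
};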
Support for environment variables was added for AWS Lambda starting November 18, 2016. Adding a variable to an existing function can be done via command line as shown below or from the AWS Console.
aws lambda update-function-configuration \
--function-name MyFunction \
--environment Variables={REMOTE_SERVICE_IP=100.100.100.100}
Documentation can be found here.
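Inside the function's code, the value is then available on process.env:

// reads the variable set via update-function-configuration above
const remoteServiceIp = process.env.REMOTE_SERVICE_IP;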
You can invoke the Lambda function via an SNS topic subscription and have it configure itself from the payload inside the SNS event.
Here's the official guide on how to do that: Invoking Lambda via SNS.
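A minimal sketch of reading such a payload in the handler (the message shape is whatever you publish; a JSON body with a hypothetical remoteServiceIp field is assumed here):

exports.handler = (event, context, callback) => {
  // SNS delivers the published message as a string in Records[0].Sns.Message
  const config = JSON.parse(event.Records[0].Sns.Message);
  console.log(`remote service is at ${config.remoteServiceIp}`);
  callback(null);
};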
A few options, depending on the use case:
If your config will not change, then you can store it in S3 objects and read it from the Lambda, or set your Lambda to trigger on config changes (see the sketch after this list). Though this is the cheapest way, you are limited in what you can do compared to the other alternatives.
If the config changes constantly, then DynamoDB as a key/value store is an alternative.
If DynamoDB is too expensive for the frequent reads/writes and not worth the value, then you can have the TCP service post its config to an SQS queue (or an SNS topic, if you want the Lambda to trigger when the service posts a new config).
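As an illustration of the first option, a hedged sketch of loading a JSON config object from S3 with the AWS SDK for JavaScript (bucket and key names are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

let cachedConfig = null; // survives warm invocations, saving repeat S3 reads

const loadConfig = () => {
  if (cachedConfig) return Promise.resolve(cachedConfig);
  return s3.getObject({ Bucket: 'my-config-bucket', Key: 'config.json' }).promise()
    .then((response) => {
      cachedConfig = JSON.parse(response.Body.toString('utf8'));
      return cachedConfig;
    });
};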
I recently implemented security on my Parse app, thinking that I could use the master key on my server (Express, not Cloud Code) to securely bypass my security implementations for admin/server-level functions.
I'm using "parse": "^1.5.0",
in my package.json.
Right now in each of my express modules I have:
var Parse = require('parse').Parse;
Parse.initialize("Application ID", "Javascript Key", "Master Key");
Everything works fine without CLPs activated, but with CLPs I can't read or write any data from the server. I understand that I could move this to Cloud Code and get it to work; however, I need to use a number of libraries that Parse does not support, and porting all of the code to Cloud Code would be very tough.
What am I doing wrong?
Here's what worked for me.
// this goes at the top of the JS page/module
'use strict';
var Parse = require('parse/node');
Parse.initialize('app-id', 'js-key', 'master-key');

exports.create = function(req, res) {
  Parse.Cloud.useMasterKey();
  // now when you do a Parse query or action, you bypass your security settings
};
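One caveat worth noting: with "^1.5.0" in package.json, npm may pull a newer SDK in which the global Parse.Cloud.useMasterKey() is deprecated in favor of a per-request option, roughly:

// newer Parse JS SDKs: pass the master key option per query instead of globally
new Parse.Query('SomeClass').find({ useMasterKey: true });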