How do I use CloudFormation resources in a Lambda function? - aws-lambda

I have added a Redis ElastiCache section to my s-resource-cf.json (a CloudFormation template), and selected its hostname as an output.
"Resources": {
...snip...
"Redis": {
"Type": "AWS::ElastiCache::CacheCluster",
"Properties": {
"AutoMinorVersionUpgrade": "true",
"AZMode": "single-az",
"CacheNodeType": "cache.t2.micro",
"Engine": "redis",
"EngineVersion": "2.8.24",
"NumCacheNodes": "1",
"PreferredAvailabilityZone": "eu-west-1a",
"PreferredMaintenanceWindow": "tue:00:30-tue:01:30",
"CacheSubnetGroupName": {
"Ref": "cachesubnetdefault"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"sgdefault",
"GroupId"
]
}
]
}
}
},
"Outputs": {
"IamRoleArnLambda": {
"Description": "ARN of the lambda IAM role",
"Value": {
"Fn::GetAtt": [
"IamRoleLambda",
"Arn"
]
}
},
"RedisEndpointAddress": {
"Description": "Redis server host",
"Value": {
"Fn::GetAtt": [
"Redis",
"Address"
]
}
}
}
I can get CloudFormation to output the Redis server host when running sls resources deploy, but how can I access that output from within a Lambda function?
Nothing in this starter project refers to that IamRoleArnLambda output, which came with the example project. According to the docs, templates are only usable for project configuration; they are not accessible from Lambda functions:
Templates & Variables are for Configuration Only
Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables.
So, then how do I set an environment variable to the hostname of the ElastiCache server after it has been created?

You can set environment variables in the environment section of a function's s-function.json file. Furthermore, if you want to prevent those variables from being put into version control (for example, if your code will be posted to a public GitHub repo), you can put them in the appropriate files in your _meta/variables directory and then reference those from your s-function.json files. Just make sure you add a _meta line to your .gitignore file.
For example, in my latest project I needed to connect to a Redis Cloud server, but didn't want to commit the connection details to version control. I put variables into my _meta/variables/s-variables-[stage]-[region].json file, like so:
{
  "redisUrl": "...",
  "redisPort": "...",
  "redisPass": "..."
}
…and referenced the connection settings variables in that function's s-function.json file:
"environment": {
"REDIS_URL": "${redisUrl}",
"REDIS_PORT": "${redisPort}",
"REDIS_PASS": "${redisPass}"
}
I then put this redis.js file in my functions/lib directory:
module.exports = () => {
  const redis = require('redis')
  const jsonify = require('redis-jsonify')
  const redisOptions = {
    host: process.env.REDIS_URL,
    port: process.env.REDIS_PORT,
    password: process.env.REDIS_PASS
  }
  return jsonify(redis.createClient(redisOptions))
}
Then, in any function that needed to connect to that Redis database, I imported redis.js:
const redis = require('../lib/redis')()
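For illustration only, a handler could then use that client along these lines (the file path, key name, and callback-style shape here are my assumptions, not part of the original setup):
// functions/myFunction/handler.js (hypothetical path)
const redis = require('../lib/redis')()

module.exports.handler = (event, context) => {
  // redis-jsonify lets us store and read plain objects as JSON
  redis.set('lastEvent', event, (setErr) => {
    if (setErr) return context.done(setErr)
    redis.get('lastEvent', (getErr, value) => context.done(getErr, value))
  })
}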
(For more details on my Serverless/Redis setup and some of the challenges I faced in getting it to work, see this question I posted yesterday.)

update
CloudFormation usage has been streamlined somewhat since the issue-tracker comment quoted in the old answer below was posted. I have submitted a documentation update to http://docs.serverless.com/docs/templates-variables, and posted a shortened version of my configuration in a gist.
It is possible to refer to a CloudFormation output in a s-function.json Lambda configuration file, in order to make those outputs available as environment variables.
s-resource-cf.json output section:
"Outputs": {
"redisHost": {
"Description": "Redis host URI",
"Value": {
"Fn::GetAtt": [
"RedisCluster",
"RedisEndpoint.Address"
]
}
}
}
s-function.json environment section:
"environment": {
"REDIS_HOST": "${redisHost}"
},
Usage in a Lambda function:
exports.handler = function(event, context) {
  console.log("Redis host: ", process.env.REDIS_HOST);
};
old answer
Looks like a solution was found / implemented in the Serverless issue tracker (link). To quote HyperBrain:
CF Output variables
To have your lambda access the CF output variables you have to give it the cloudformation:describeStacks access rights in the lambda IAM role.
The CF.loadVars() promise will add all CF output variables to the process environment as SERVERLESS_CF_<OutVarName>. It will add a few ms to the startup time of your lambda.
Change your lambda handler as follows:
// Require Serverless ENV vars
var ServerlessHelpers = require('serverless-helpers-js');
ServerlessHelpers.loadEnv();

// Require Logic
var lib = require('../lib');

// Lambda Handler
module.exports.handler = function(event, context) {
  ServerlessHelpers.CF.loadVars()
    .then(function() {
      lib.respond(event, function(error, response) {
        return context.done(error, response);
      });
    });
};
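For reference, the describeStacks permission mentioned above could be granted by adding a statement along these lines to the Lambda role's policy (a sketch; scope the Resource down to your stack ARN if you prefer):
{
  "Effect": "Allow",
  "Action": [
    "cloudformation:DescribeStacks"
  ],
  "Resource": "*"
}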

Related

How do I access .env variables and use it inside the cypress.json file?

I have five different cypress projects in the same repo.
The cypress.json file of each project has reporterOptions:
{
  "fixturesFolder": "./src/fixtures",
  "integrationFolder": "./src/integration",
  ...
  "reporter": "../../node_modules/mocha-testrail-reporter",
  "reporterOptions": {
    "username": "my-user-name",
    "password": "my-password",
    "host": "https://abc.testrail.io",
    "domain": "abc.testrail.io",
    "projectId": 1,
    "suiteId": 3,
    "includeAllInTestRun": true,
    "runName": "test"
  }
}
The username, host, password, and domain values are the same for all five Cypress projects. Thus, I want to put them in a .env file like this, and access these variables and use them in the cypress.json files:
USERNAME= my-user-name
PASSWORD= my-password
HOST= https://abc.testrail.io
DOMAIN= abc.testrail.io
How do I access these variables? Any help will be appreciated. Thank you,
Take a look at Extending the Cypress Config File
Cypress does not support the extends syntax in its configuration file, but it can be done in the plugins file:
module.exports = (on, config) => {
  const reporterParams = require('.env') // not quite sure of the format
  // may need to fiddle it
  const reporterOptions = {
    ...config.reporterOptions, // spread existing options
    "username": reporterParams.username,
    "password": reporterParams.password,
    "host": reporterParams.host,
    "domain": reporterParams.domain,
  }
  const merged = {
    ...config,
    reporterOptions // note: the key must be reporterOptions (not reportOptions) for Cypress to pick it up
  }
  return merged
}
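If the shared values live in a plain .env file, one option (my assumption, not part of the original answer) is to load them with the dotenv package instead of require('.env'), roughly like this in each project's plugins/index.js:
require('dotenv').config() // loads the .env key/value pairs into process.env

module.exports = (on, config) => {
  return {
    ...config,
    reporterOptions: {
      ...config.reporterOptions, // keep per-project settings like projectId/suiteId
      username: process.env.USERNAME, // consider prefixing (e.g. TESTRAIL_USERNAME) to avoid clashes with system variables
      password: process.env.PASSWORD,
      host: process.env.HOST,
      domain: process.env.DOMAIN
    }
  }
}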

Traefik with dynamic routing to ECS backends, running as one-off tasks

I'm trying to implement a reverse-proxy service using Traefik v1 (1.7) with ECS one-off tasks as backends, as described in this SO question. Routing should be dynamic - requests to the /user/1234/* path should go to the ECS task running with the appropriate Docker labels:
docker_labels = {
  "traefik.frontend.rule" = "Path:/user/1234"
  "traefik.backend"       = "trax1"
  "traefik.enable"        = "true"
}
So far this setup works fine, but I need to create one ECS task definition per running task, because the Docker labels are a property of the ECS task definition, not of the ECS task itself. Is it possible to create only one task definition and pass the Traefik rules in ECS task tags, as task key/value properties?
That would require some modification of the Traefik source code. Are there any other available options or approaches I've missed, like API Gateway or Lambda@Edge? I have no experience with those technologies; real-world examples are more than welcome.
Solved by using the Traefik REST API provider. The external component that runs the one-off tasks can discover the task's internal IP and update the Traefik configuration on the fly, pairing traefik.frontend.rule = "Path:/user/1234" with the task's internal IP:port in the backends section.
It should first GET the current Traefik configuration from the /api/providers/rest endpoint, remove or add the corresponding part (if a task was stopped or started), and PUT the updated configuration back to the same endpoint, as in the example configuration and the sketch below.
{
  "backends": {
    "backend-serv1": {
      "servers": {
        "server-service-serv-test1-serv-test-4ca02d28c79b": {
          "url": "http://172.16.0.5:32793"
        }
      }
    },
    "backend-serv2": {
      "servers": {
        "server-service-serv-test2-serv-test-279c0ba1959b": {
          "url": "http://172.16.0.5:32792"
        }
      }
    }
  },
  "frontends": {
    "frontend-serv1": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv1",
      "routes": {
        "route-frontend-serv1": {
          "rule": "Path:/user/1234"
        }
      }
    },
    "frontend-serv2": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv2",
      "routes": {
        "route-frontend-serv2": {
          "rule": "Path:/user/5678"
        }
      }
    }
  }
}
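A minimal sketch of that update flow in Node.js, using only the built-in http module (the Traefik host/port, names, and helper shape here are assumptions, not part of the original setup):
const http = require('http');

// Small helper: perform a request against the Traefik API and resolve with the parsed body.
function traefikRequest(method, path, body) {
  return new Promise((resolve, reject) => {
    const req = http.request(
      { host: 'traefik', port: 8080, method, path, headers: { 'Content-Type': 'application/json' } },
      (res) => {
        let data = '';
        res.on('data', (chunk) => { data += chunk; });
        res.on('end', () => {
          try { resolve(data ? JSON.parse(data) : {}); } catch (err) { reject(err); }
        });
      }
    );
    req.on('error', reject);
    if (body) req.write(JSON.stringify(body));
    req.end();
  });
}

// Register a started one-off task under /user/<id>; a mirror of this removes the entries on stop.
async function registerTask(userId, hostPort) {
  // 1. GET the current configuration from the rest provider endpoint.
  const config = await traefikRequest('GET', '/api/providers/rest');
  config.backends = config.backends || {};
  config.frontends = config.frontends || {};

  // 2. Add a backend/frontend pair for the new task.
  config.backends['backend-user-' + userId] = {
    servers: { ['server-user-' + userId]: { url: 'http://' + hostPort } }
  };
  config.frontends['frontend-user-' + userId] = {
    entryPoints: ['http'],
    backend: 'backend-user-' + userId,
    routes: { ['route-user-' + userId]: { rule: 'Path:/user/' + userId } }
  };

  // 3. PUT the whole configuration back to the same endpoint.
  await traefikRequest('PUT', '/api/providers/rest', config);
}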

How to deploy Next.js with GraphQL backend on Zeit Now?

I have a Next.js/Express/Apollo GraphQL app running fine on localhost.
I'm trying to deploy it on Zeit Now, and the Next.js part works fine, but the GraphQL backend fails because the /graphql route returns:
502: An error occurred with your deployment
Code: NO_STATUS_CODE_FROM_LAMBDA
My now.json looks like:
{
  "version": 2,
  "builds": [
    { "src": "next.config.js", "use": "@now/next" },
    { "src": "server/server.js", "use": "@now/node" }
  ],
  "routes": [
    { "src": "/api/(.*)", "dest": "server/server.js" },
    { "src": "/graphql", "dest": "server/server.js" }
  ]
}
Suggestions?
Here’s a complete example of Next.js/Apollo GraphQL running both on Zeit Now (as serverless function/lambda) and Heroku (with an Express server):
https://github.com/tomsoderlund/nextjs-pwa-graphql-sql-boilerplate
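One common cause of NO_STATUS_CODE_FROM_LAMBDA with @now/node (only an assumption about this particular app, since the question doesn't show server.js) is that the entry point calls app.listen() instead of exporting a request handler. A minimal sketch of the serverless-friendly shape, using apollo-server-express 2.x:
// server/server.js (sketch)
const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');

const app = express();

// Hypothetical minimal schema; replace with your real typeDefs/resolvers.
const server = new ApolloServer({
  typeDefs: gql`type Query { hello: String }`,
  resolvers: { Query: { hello: () => 'world' } }
});
server.applyMiddleware({ app, path: '/graphql' });

// Export the Express app (itself a (req, res) handler) instead of calling app.listen(),
// so the @now/node builder can invoke it as a lambda.
module.exports = app;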
I was getting that error until I found a solution on the Wes Bos Slack channel.
The following worked for me, but it's possible you could be getting that error for a different reason.
I'm not sure why it works.
You can see it working here
cd backend
Run npm install graphql-import
Update scripts in package.json:
"deploy": "prisma deploy --env-file variables.env&& npm run writeSchema",
"writeSchema": "node src/writeSchema.js"
Note: for non-Windows users, make sure to place a space before the &&.
Create src/writeSchema.js:
const fs = require('fs');
const { importSchema } = require('graphql-import');
const text = importSchema("src/generated/prisma.graphql");
fs.writeFileSync("src/schema_prep.graphql", text)
Update src/db.js:
const db = new Prisma({
  typeDefs: __dirname + "/schema_prep.graphql",
  ...
});
Update src/createServer.js:
return new GraphQLServer({
  typeDefs: __dirname + '/schema.graphql',
  ...
});
Update src/schema.graphql:
# import * from './schema_prep.graphql'
Create now.json
{
  "version": 2,
  "name": "Project Name",
  "builds": [
    { "src": "src/index.js", "use": "@now/node-server" }
  ],
  "routes": [
    { "src": "/.*", "dest": "src/index.js" }
  ],
  "env": {
    "SOME_VARIABLE": "xxx",
    ...
  }
}
Run npm run deploy to initially create schema_prep.graphql.
Run now
Another reply said this:
You should not mix graphql imports and js/ts imports. The syntax in the graphql file will be interpreted by graphql-import and will be ignored by ncc (the compiler which reads the __dirname stuff and moves the file to the correct directory, etc.).
In my example 'schema_prep.graphql' is already preprocessed with the imports from the generated graphql file.
Hopefully this helps.

AWS Lambda Code in S3 Bucket not updating

I am using CloudFormation to create my Lambda function, with the code in an S3 bucket that has versioning enabled.
"MYLAMBDA": {
"Type": "AWS::Lambda::Function",
"Properties": {
"FunctionName": {
"Fn::Sub": "My-Lambda-${StageName}"
},
"Code": {
"S3Bucket": {
"Fn::Sub": "${S3BucketName}"
},
"S3Key": {
"Fn::Sub": "${artifact}.zip"
},
"S3ObjectVersion": "1e8Oasedk6sDZu6y01tioj8X._tAl3N"
},
"Handler": "streams.lambda_handler",
"Runtime": "python3.6",
"Timeout": "300",
"MemorySize": "512",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
}
}
}
The Lambda function gets created successfully. When I copy a new artifact zip file to the S3 bucket, a new version of the file gets created with a new S3ObjectVersion string, but the Lambda function code is still using the older version.
The AWS CloudFormation documentation clearly says the following:
To update a Lambda function whose source code is in an Amazon S3
bucket, you must trigger an update by updating the S3Bucket, S3Key, or
S3ObjectVersion property. Updating the source code alone doesn't
update the function.
Is there an additional trigger event I need to create to get the code updated?
In case anyone is running into a similar issue, I have figured out a way that worked in my case. I use Terraform + Jenkins to create my Lambda functions through an S3 bucket. In the beginning I could create the functions, but they wouldn't update once created, even though I verified the zip files in S3 were updated. It took me some time to figure out that I needed to make one of the following two changes.
Solution 1: give the object a new key when uploading the new zip file. In my Terraform I add the Git commit ID as part of the S3 key.
resource "aws_s3_bucket_object" "lambda-abc-package" {
bucket = "${aws_s3_bucket.abc-bucket.id}"
key = "${var.lambda_ecs_task_runner_bucket_key}_${var.git_commit_id}.zip"
source = "../${var.lambda_ecs_task_runner_bucket_key}.zip"
}
Solution 2: add source_code_hash to the Lambda resource.
resource "aws_lambda_function" "abc-ecs-task-runner" {
s3_bucket = "${var.bucket_name}"
s3_key = "${aws_s3_bucket_object.lambda-ecstaskrunner-package.key}"
function_name = "abc-ecs-task-runner"
role = "${aws_iam_role.AbcEcsTaskRunnerRole.arn}"
handler = "index.handler"
memory_size = "128"
runtime = "nodejs6.10"
timeout = "300"
source_code_hash = "${base64sha256(file("../${var.lambda_ecs_task_runner_bucket_key}.zip"))}"
Either one should work. Also, when checking the Lambda code in the console, refreshing the URL in the browser won't show the update; you need to go back to Functions and open that function again.
Hope this helps.
I also faced the same issue: my code was in Archive.zip in an S3 bucket, and when I uploaded a new Archive.zip, the Lambda did not behave according to the new code.
The solution was to paste the S3 location of Archive.zip into the Lambda's function code section again and save it again.
How did I figure out that the Lambda was not taking the new code?
Go to your Lambda function --> Actions --> Export Function --> Download Deployment Package, and check whether the code is actually the code you recently uploaded to S3.
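If you prefer the CLI (not part of the original answer), you can also fetch a presigned URL for the package the function is actually running; the function name here is a placeholder:
aws lambda get-function --function-name my-function --query 'Code.Location' --output text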
You have to update the S3ObjectVersion value to the new version ID in your CloudFormation template itself.
Then you have to update your CloudFormation stack with the new template.
You can do this either in the CloudFormation console or via the AWS CLI.
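One way to avoid editing the template by hand on every release (a sketch, assuming you introduce a hypothetical CodeVersion parameter; the bucket, key, and stack names below are placeholders) is to pass the new version ID in at deploy time:
"Parameters": {
  "CodeVersion": { "Type": "String" }
},
...
"Code": {
  "S3Bucket": { "Fn::Sub": "${S3BucketName}" },
  "S3Key": { "Fn::Sub": "${artifact}.zip" },
  "S3ObjectVersion": { "Ref": "CodeVersion" }
}
Then look up the latest version ID of the artifact and update the stack with it:
VERSION=$(aws s3api list-object-versions --bucket my-bucket --prefix my-artifact.zip \
  --query 'Versions[?IsLatest].VersionId' --output text)
aws cloudformation deploy --stack-name my-stack --template-file template.json \
  --parameter-overrides CodeVersion=$VERSION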
From the AWS CLI you can also do an update-function-code call, as this post describes: https://nono.ma/update-aws-lambda-function-code

Parse Server S3 file adapter with Heroku app

I am trying to set up the S3 file adapter, but I'm not sure if I have the formatting of something incorrect. I have followed this guide exactly:
https://github.com/ParsePlatform/parse-server/wiki/Configuring-File-Adapters#configuring-s3adapter
But when I uncomment the block of code below, put in my AWS credentials, and push the setup back to Heroku, the app or dashboard won't start any longer, saying there is an application error:
//**** File Storage ****//
filesAdapter: new S3Adapter(
  {
    "xxxxxxxx",
    "xxxxxxxx",
    "xxxxxxxx",
    {directAccess: true}
  }
)
I would set it up as follows for Heroku:
Make sure that after performing all steps described in the guide your policy looks similar to this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME",
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}
Now apply this policy to the bucket: select your bucket in the S3 console and click the 'Properties' button in the top right corner. Expand the 'Permissions' section, press 'Edit bucket policy', and paste the JSON above into the text field.
Configure Parse Server in the index.js file:
var S3Adapter = require('parse-server').S3Adapter;
var s3Adapter = new S3Adapter(
  "AWS_KEY",
  "AWS_SECRET_KEY",
  "bucket-name",
  { directAccess: true }
);
and add two lines to the Parse Server init (var api = new ParseServer({..})):
filesAdapter: s3Adapter,
fileKey: process.env.PARSE_FILE_KEY
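For context, the full init in index.js might then look roughly like this (a sketch; the databaseURI, appId, masterKey, and serverURL values are the usual parse-server-example placeholders, not from the question):
var api = new ParseServer({
  databaseURI: process.env.DATABASE_URI || 'mongodb://localhost:27017/dev',
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
  appId: process.env.APP_ID || 'myAppId',
  masterKey: process.env.MASTER_KEY || '',
  serverURL: process.env.SERVER_URL || 'http://localhost:1337/parse',
  // the two lines added for the S3 adapter:
  filesAdapter: s3Adapter,
  fileKey: process.env.PARSE_FILE_KEY
});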
Similar to Cliff's post, .S3Adapter has to be outside the ()
var S3Adapter = require('parse-server').S3Adapter;
And then inside parse server init:
filesAdapter: new S3Adapter(
  {
    accessKey: process.env.S3_ACCESS_KEY || '',
    secretKey: process.env.S3_SECRET_KEY || '',
    bucket: process.env.S3_BUCKET || '',
    directAccess: true
  }
)
This worked in this case.
