I have published my NestJS app to AWS Lambda.
When I open the root URL
https://xxx/
it shows "Hello World" correctly.
But when I open
https://xxx/sales/subscription
it shows a "Missing Authentication Token" message.
Has anyone experienced this kind of issue before?
I have fixed the issue and I'm sharing the solution here; hopefully it helps anyone having the same problem.
Apparently, "Missing Authentication Token" is what API Gateway returns when the requested route does not exist.
The app was deployed to AWS Lambda using the Serverless Framework.
I fixed the issue by simply opening the serverless.yaml file and registering the missing route in the functions section.
Before:
functions:
  main: # The name of the lambda function
    # The module 'handler' is exported in the file 'src/lambda'
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /
After:
functions:
  main: # The name of the lambda function
    # The module 'handler' is exported in the file 'src/lambda'
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /
      - http:
          method: any
          path: /sales/subscription
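Note: if the goal is to forward every path to the NestJS router rather than registering each route by hand, a greedy proxy event should also work. This is a sketch I have not verified against this exact app:

functions:
  main:
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /
      - http:
          method: any
          path: /{proxy+} # catches all sub-paths and forwards them to the handler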
I want to use a custom domain name for my Lambda API. I found the plugin serverless-domain-manager.
What I did:
install the plugin,
create a custom domain name in AWS API Gateway: uat-api.mydomain.com,
create a DNS record pointing to my custom domain name,
add the custom config below to the serverless.yml file.
custom:
  customDomain:
    domainName: uat-api.mydomain.com
    basePath: api
    certificateName: som-cert-name.com
    certificateArn: arnid
    createRoute53Record: true
    endpointType: 'regional'
    securityPolicy: tls_1_2
    apiType: rest
    autoDomain: false
    hostedZoneId: Z1I1XQT4F25333
Now when I run sls create_domain I get this error:
[AWS apigatewayv2 403 3.044s 0 retries] getDomainName({ DomainName: 'uat-api.mydomain.com' })
Error --------------------------------------------------
Error: Unable to fetch information about uat-api.mydomain.com
at APIGatewayWrapper.<anonymous> (/Users/../node_modules/serverless-domain-manager/dist/src/aws/api-gateway-wrapper.js:112:27)
at Generator.throw (<anonymous>)
at rejected (/../node_modules/serverless-domain-manager/dist/src/aws/api-gateway-wrapper.js:6:65)
at process._tickCallback (internal/process/next_tick.js:68:7)
So, what's wrong? Am I missing something?
I know that's the configuration provided in the npm/GitHub readme, but frankly it seems like overkill.
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: 'test.****.io'
    basePath: 'somePath'
    stage: ${self:provider.stage}
    createRoute53Record: true
This is the configuration I use. The domain is hosted in Route53, but otherwise I don't need to touch the AWS console at all when I create a new subdomain (in this case, create_domain did everything).
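For reference, the typical workflow with this plugin is to create the domain mapping once and then deploy as usual (assuming the config above is already in serverless.yml):

sls create_domain
sls deploy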
My Lambda function's source is over 4096 characters, so I can't deploy it as inline code in a CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and file details in CloudFormation, and then deploy.
I can't find a way to deploy with one command; if I update the Lambda code, I have to repeat all the steps above.
But both AWS SAM and the Serverless Framework can deploy Lambda functions without inline code.
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
Any solutions or recommendations for me?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads it to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to set up an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
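For example, a minimal wrapper script might look like the sketch below (bucket, prefix, and stack names are placeholders):

#!/bin/sh
set -e
# Zip local code, upload it to S3, and rewrite the template
aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml
# Create or update the stack from the rewritten template
aws cloudformation deploy \
  --template-file output.yaml \
  --stack-name my-stack \
  --capabilities CAPABILITY_IAM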
You can zip the file first, then use the AWS CLI to update your Lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
Within CloudFormation (the last three lines are the relevant ones):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
"The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need."
For the Serverless Framework, that's not true by default. The generated serverless.yml file includes config for the Lambda function itself, but the API Gateway configuration is provided only as an example in the commented-out section below.
If you uncomment the 'events' section for http, it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get
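For contrast, uncommenting that section is what opts you in to an API Gateway endpoint:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get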
This question is about a CloudFormation template that creates Lambda functions. The template is in CodeCommit and uses CodePipeline to create the Lambda, but I am struggling to specify the "Code" property. The actual code for the Lambda function is in my CodeCommit repo. Below is the example from the AWS documentation, but it appears to take the code from an S3 bucket. Do I specify the file name? If so, in what format? Thank you.
AMIIDLookup:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "index.handler"
    Role:
      Fn::GetAtt:
        - "LambdaExecutionRole"
        - "Arn"
    Code:
      S3Bucket: "lambda-functions"
      S3Key: "amilookup.zip"
    Runtime: "nodejs8.10"
    Timeout: 25
    TracingConfig:
      Mode: "Active"
Further info: here is my CloudFormation template, which is pushed to the CodeCommit repo. The template and the pipeline work perfectly with inline code, but I do not know how to specify that the code should be taken from a file in the CodeCommit repo, e.g. if the code is in the file ./abc/index.js.
Resources:
  LFVQS1:
    Type: 'AWS::Lambda::Function'
    Properties:
      Handler: 'index.function_name'
      Role: 'arn:aws:iam::561731601292:role/service-role/mailfwd-role-m5rl5tu3'
      Runtime: "nodejs8.10"
      Code:
        ZipFile: "exports.writeToConsole = function (event, context, callback){ console.log('Hello'); callback(null); }"
If you're asking in the context of CodePipeline (based on the tags), you can either use the ParameterOverrides configuration property of the CloudFormation action to reference the CodePipeline artifact (stored in S3) or use the S3 publish action and reference the location in your CloudFormation template.
CloudFormation action reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-action-reference.html
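As a rough sketch of the first approach (parameter and artifact names like LambdaBucket, LambdaKey, and SourceOutput are placeholders, not taken from your setup), declare the bucket and key as template parameters:

Parameters:
  LambdaBucket:
    Type: String
  LambdaKey:
    Type: String
Resources:
  LFVQS1:
    Type: 'AWS::Lambda::Function'
    Properties:
      Handler: 'index.handler'
      Runtime: 'nodejs8.10'
      Role: 'arn:aws:iam::561731601292:role/service-role/mailfwd-role-m5rl5tu3'
      Code:
        S3Bucket: !Ref LambdaBucket
        S3Key: !Ref LambdaKey

and then, in the pipeline's CloudFormation deploy action configuration, point them at the CodePipeline artifact with the Fn::GetArtifactAtt override function:

ParameterOverrides: |
  {
    "LambdaBucket": { "Fn::GetArtifactAtt": ["SourceOutput", "BucketName"] },
    "LambdaKey": { "Fn::GetArtifactAtt": ["SourceOutput", "ObjectKey"] }
  }

Note this deploys the whole source artifact zip as the function code, so the handler file has to sit at the root of that zip.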
When I deploy my serverless API using:
serverless deploy
the Lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
    apiKeys:
      - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}

functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'

package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**

plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding

custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes

layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error while using the explicit layers keys that you are using to define a Lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I consider this a temporary solution, because I wanted to define my layers explicitly like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under the folder nodejs, so that it looks like this when unzipped: nodejs/node_modules/<various packages>.
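For reference, a minimal way to produce such an archive from an already-installed node_modules folder might be (paths are assumptions):

mkdir -p nodejs
cp -r node_modules nodejs/   # layer contents must live under nodejs/ for the Node.js runtime
zip -r nodejs.zip nodejs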
Then, instead of the explicit definition of layers, I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function, the layer is referred to like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
The name TestLambdaLayer follows the convention <your layer name>LambdaLayer, as documented here.
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
If anyone faces a similar Runtime.ImportModuleError, it's fair to say that another cause of this issue could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
it will cause exactly the same error at runtime once you've deployed your Lambda function (with the Serverless Framework). Just make sure to remove the entries that could create a conflict with your dependencies.
I am using TypeScript with serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems the files were not included in the zip file by serverless when I was using require.
PS: Removing serverless-plugin-typescript and switching back to JavaScript also solved the problem.
I am using the Serverless Framework with C# to execute queries in Athena. My AWS Lambda function was deleted automatically, and when I try to deploy it again, it doesn't work.
sls deploy --stage dev # to deploy the function
sls remove --stage dev # to remove the function
When I tried to redeploy it, I got an error like the one below.
As mentioned in the screenshot above, I browsed the link for more error output, which shows the stack detail. I have attached it below.
[screenshot: CloudFormation stack detail]
serverless.yml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#   docs.serverless.com
#
# Happy Coding!

service: management-athena

custom:
  defaultStage: dev
  currentStage: ${opt:stage, self:custom.defaultStage} # 'dev' is default unless overridden by --stage flag

provider:
  name: aws
  runtime: dotnetcore2.1
  stage: ${self:custom.currentStage}
  role: arn:aws:iam::***********:role/service-role/nexus_labmda_schema_partition # must validly reference a role defined in your account
  timeout: 300
  environment: # Service wide environment variables
    DATABASE_NAME: ${file(./config/config.${self:custom.currentStage}.json):DATABASE_NAME}
    TABLE_NAME: ${file(./config/config.${self:custom.currentStage}.json):TABLE_NAME}
    S3_PATH: ${file(./config/config.${self:custom.currentStage}.json):S3_PATH}
    MAX_SITE_TO_BE_PROCESSED: ${file(./config/config.${self:custom.currentStage}.json):MAX_SITE_TO_BE_PROCESSED}

package:
  artifact: bin/release/netcoreapp2.1/deploy-package.zip

functions:
  delete_partition:
    handler: CsharpHandlers::AwsAthena.AthenaHandler::DeletePartition
    description: Lambda function which runs at specified interval to delete athena partitions
    # The `events` block defines how to trigger the AthenaHandler.DeletePartition code
    events:
      - schedule:
          rate: cron(0 8 * * ? *) # triggered every day at 3:00 AM EST; the given time is UTC, so 3 AM EST is 8 AM UTC
          enabled: true
I found the solution!
Sometimes we won't be able to deploy Lambda functions for any number of reasons. As @ASR mentioned in the comments, there might be Serverless Framework version issues, but in my case that didn't solve it. Try deleting your function's log group from CloudWatch.
Go to AWS -> expand Services -> select CloudWatch -> select Logs -> search for your log group, select it, and delete it. If your function name is my_function, then your log group name will be something like /aws/lambda/my_function.
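The same can be done from the CLI, assuming the default log group naming:

aws logs delete-log-group --log-group-name /aws/lambda/my_function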
Then just redeploy your Lambda function.
I am posting this hoping that it helps someone! Please correct me if I am wrong.