I created a Lambda function behind an API Gateway, and I am trying to implement a healthcheck for it as well, but right now I need to do this in 2 steps:
1. Run serverless deploy so it spits out the endpoint for the API Gateway.
2. Manually insert that endpoint into the healthcheck.environment.api_endpoint param, and run serverless deploy a second time.
functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http:
          method: ANY
          path: /
      - http:
          method: ANY
          path: '{proxy+}'
  healthcheck:
    environment:
      api_endpoint: '<how to reference the api endpoint?>'
    handler: healthcheck.handler
    events:
      - schedule: rate(1 minute)

custom:
  wsgi:
    app: app.app
    pythonBin: python3
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux
Is there a way to get a reference to the API Gateway at creation time, so it can be passed as an environment variable to the healthcheck app? The only alternative I can think of is to create a separate serverless.yaml just for the healthcheck.
I noticed I can grab the API id with a !Ref in serverless.yml and reconstruct the endpoint inside the Lambda, like so:
healthcheck:
  environment:
    api_endpoint: !Ref ApiGatewayRestApi
    region: ${env:region}
  handler: healthcheck.handler
  events:
    - schedule: rate(1 minute)
and then reconstruct:
import os

def handler(event, context):
    api_id = os.environ['api_endpoint']  # the RestApi id injected via !Ref above
    region = os.environ['region']
    endpoint = f"https://{api_id}.execute-api.{region}.amazonaws.com/dev"
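To complete the picture, here is a minimal healthcheck sketch built on that reconstruction. It assumes the WSGI app's root path answers with a 2xx and keeps the dev stage hard-coded as above; urllib is used so no extra dependency is needed:

import os
import urllib.request

def handler(event, context):
    api_id = os.environ['api_endpoint']
    region = os.environ['region']
    endpoint = f"https://{api_id}.execute-api.{region}.amazonaws.com/dev"
    # urlopen raises HTTPError for 4xx/5xx responses, which fails the
    # scheduled invocation and shows up in the Lambda error metrics.
    with urllib.request.urlopen(endpoint, timeout=5) as response:
        return {"status": response.status, "endpoint": endpoint}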
With a bit of CloudFormation you can inject it directly, so you don't need to compute it every time in your Lambda handler:
api_endpoint: {
  'Fn::Join': [
    '',
    [
      { Ref: 'WebsocketsApi' },
      '.execute-api.',
      { Ref: 'AWS::Region' },
      '.',
      { Ref: 'AWS::URLSuffix' },
      "/${opt:stage, self:provider.stage, 'dev'}",
    ],
  ],
},
(this is an example for a websocket API)
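For the REST API in the question, the same injection works with the default ApiGatewayRestApi logical id that the Serverless Framework generates for http events. A block-style sketch, with the stage resolution copied from above:

healthcheck:
  handler: healthcheck.handler
  environment:
    api_endpoint:
      Fn::Join:
        - ''
        - - https://
          - Ref: ApiGatewayRestApi
          - .execute-api.
          - Ref: AWS::Region
          - .
          - Ref: AWS::URLSuffix
          - "/${opt:stage, self:provider.stage, 'dev'}"
  events:
    - schedule: rate(1 minute)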
Is your API created through the same or another CloudFormation stack? If so, you can reference it directly (same stack) or through a CloudFormation variable export.
https://carova.io/snippets/serverless-aws-cloudformation-output-stack-variables
https://carova.io/snippets/serverless-aws-reference-other-cloudformation-stack-variables
If you created it outside of CloudFormation, i.e. in the AWS console, then you'll need to add the ids to the template yourself, most likely by defining different environment variables per stage.
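A sketch of the cross-stack variant the links describe, with my-api used as a placeholder name for the service/stack that owns the API: export the id from the owning stack, then read it in the consuming service.

# In the service that owns the API (CloudFormation stack name: my-api-dev):
resources:
  Outputs:
    RestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: my-api-dev-RestApiId   # also usable via Fn::ImportValue

# In the consuming service (e.g. the healthcheck):
functions:
  healthcheck:
    handler: healthcheck.handler
    environment:
      # ${cf:<stackName>.<outputKey>} resolves the output at deploy time
      api_id: ${cf:my-api-dev.RestApiId}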
I would like to add a Cognito authorizer to my Lambda function, but for this I need the Cognito ARN, which is created in the CloudFormation stack (CognitoUserPool in my resources section). I'm using the Serverless Framework.
Part of the serverless.yml file:
graphql:
  handler: src/lambda-functions/graphql/index.handler
  timeout: 30
  memorySize: 2048
  events:
    - http:
        path: graphql
        method: any
        private: true
        cors: true
        authorizer:
          arn:
            Fn::Join:
              - ''
              - - 'arn:aws:cognito-idp:'
                - ${self:provider.region}
                - ':'
                - Ref: AWS::AccountId
                - ':userpool/'
                - Ref: CognitoUserPool
I am getting an error while deploying the application:
TypeError: functionArn.split is not a function
While debugging, I discovered that the value passed to that function is still the unresolved Fn::Join object:
{"Fn::Join":["",["arn:aws:cognito-idp:","eu-west-1",":",{"Ref":"AWS::AccountId"},":userpool/",{"Ref":"CognitoUserPool"}]]}
whereas it should receive an already resolved ARN, for example:
arn:aws:cognito-idp:eu-west-1:XXXXXXXXXX:userpool/eu-west-XXXXXXX
How can I force the output of Fn::Join to be resolved and passed to the arn property?
According to this, giving the authorizer a name lets you use intrinsic functions to refer to the ARN:
graphql:
  handler: src/lambda-functions/graphql/index.handler
  timeout: 30
  memorySize: 2048
  events:
    - http:
        path: graphql
        method: any
        private: true
        cors: true
        authorizer:
          name: CognitoAuthorizer # supply a unique name here
          arn: !GetAtt CognitoUserPool.Arn
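For reference, a sketch of the kind of user pool resource that !GetAtt CognitoUserPool.Arn resolves against; the pool name is a placeholder, since the question only says the pool is declared in the resources section:

resources:
  Resources:
    CognitoUserPool:
      Type: AWS::Cognito::UserPool
      Properties:
        UserPoolName: my-app-${self:provider.stage}-user-pool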
Our API has the following endpoints:
POST /users - create a user
GET /users/{userId} - get a particular user
GET /posts/{postId} - get a particular post
GET /posts/{postId}/users - get the users who contributed to this post
I have defined two services: users-service and posts-service. In these two services I define the lambdas like so. I'm using the serverless-domain-manager plugin to create base path mappings:
/users-service/serverless.yaml:
service: users-service

provider:
  name: aws
  runtime: nodejs10.x
  stage: dev

plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: 'serverlesstesting.example.com'
    basePath: 'users'
    stage: ${self:provider.stage}
    createRoute53Record: true

functions:
  create:
    name: userCreate
    handler: src/create.handler
    events:
      - http:
          path: /
          method: post
  get:
    name: userGet
    handler: src/get.handler
    events:
      - http:
          path: /{userId}
          method: get
/posts-service/serverless.yaml:
service: posts-service

provider:
  name: aws
  runtime: nodejs10.x
  stage: dev

plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: 'serverlesstesting.example.com'
    basePath: 'posts'
    stage: ${self:provider.stage}
    createRoute53Record: true

functions:
  get:
    name: postsGet
    handler: src/get.handler
    events:
      - http:
          path: /{postId}
          method: get
  getUsersForPost:
    handler: userGet ?
    events: ??
The problem is that GET /posts/{postId}/users should call the same userGet Lambda that the users-service uses, but the source for that Lambda lives in the users-service, not the posts-service.
So my question becomes:
How do I reference a service from another service using base path mappings? In other words, is it possible for the posts service to actually make a call to the parent custom domain and into the users base path mapping and its service?
Consider the approach described here:
https://serverless-stack.com/chapters/share-an-api-endpoint-between-services.html
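In short, the linked chapter shares a single API Gateway across services: one service exports the REST API id and root resource id, and the other attaches its functions to that API through provider.apiGateway. A sketch under that assumption (the export names are illustrative, not from the original configs):

# users-service/serverless.yaml - export the API
resources:
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ${self:provider.stage}-ApiGatewayRestApiId
    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
      Export:
        Name: ${self:provider.stage}-ApiGatewayRestApiRootResourceId

# posts-service/serverless.yaml - attach to the exported API
provider:
  name: aws
  runtime: nodejs10.x
  apiGateway:
    restApiId:
      Fn::ImportValue: ${self:provider.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      Fn::ImportValue: ${self:provider.stage}-ApiGatewayRestApiRootResourceId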
I would like to use an external layer arn:aws:lambda:eu-central-1:347034527139:layer:tf_keras_pillow:1 in my Serverless project.
I do so by having the following in my serverless.yml:
functions:
  api:
    handler: functions/api/handler.run
    layers: arn:aws:lambda:eu-central-1:347034527139:layer:tf_keras_pillow:1
    events:
      - http:
          path: /image/{id}/{mode}
          method: get
          request:
            parameters:
              paths:
                id: true
                mode: true
However, when checking the AWS Lambda function in the console, there is no layer added after deployment. Any ideas?
The only way to add the layer is by manually doing so in the GUI.
The layers value is an array, per the documentation: https://serverless.com/framework/docs/providers/aws/guide/layers#using-your-layers.
functions:
  api:
    handler: functions/api/handler.run
    layers:
      - arn:aws:lambda:eu-central-1:347034527139:layer:tf_keras_pillow:1
    events:
      - http:
          path: /image/{id}/{mode}
          method: get
          request:
            parameters:
              paths:
                id: true
                mode: true
Should work.
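As a side note, if the layer is packaged in the same service rather than referenced by an external ARN, the Framework exposes it as a CloudFormation resource named after the layer (TitleCased, suffixed with LambdaLayer), which you can reference with Ref. A sketch with an assumed layer name and path:

layers:
  tfKerasPillow:
    path: layers/tf_keras_pillow   # directory containing the layer contents

functions:
  api:
    handler: functions/api/handler.run
    layers:
      - Ref: TfKerasPillowLambdaLayer   # TitleCased layer name + "LambdaLayer"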
When I use CloudFormation to deploy an API with a Lambda integration, I'm able to dynamically link a Lambda function to an API method using standard CloudFormation syntax like !Ref and !GetAtt:
SomeMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    HttpMethod: PUT
    Integration:
      Type: AWS_PROXY
      IntegrationHttpMethod: POST
      # this is defined during deployment
      Uri: !Join ["", ["arn:aws:apigateway:", !Ref "AWS::Region", ":lambda:path/2015-03-31/functions/", !GetAtt LambdaFunction.Arn, "/invocations"]]
      IntegrationResponses:
        - StatusCode: 200
    ResourceId: !Ref APIResource
Now when I want to reference an external swagger file to define my API, which I can do via the BodyS3Location property on the AWS::ApiGateway::RestApi object, I can't understand how to dynamically link the defined methods to a Lambda function.
API as Lambda Proxy describes how to statically link a method to a Lambda function, i.e.
"x-amazon-apigateway-integration": {
"credentials": "arn:aws:iam::123456789012:role/apigAwsProxyRole",
"responses": { ... },
# this is statically defined
"uri": "arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:123456789012:function:Calc/invocations",
"httpMethod": "POST",
"requestTemplates": { ... },
"type": "aws"
}
But if my Lambda function is part of the same CloudFormation stack (which it is and should be), and I want to deploy multiple instances, how can I dynamically integrate my API with Lambda? I can't use !Ref or !GetAtt because the swagger file is outside of CloudFormation's context.
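One workaround that is often suggested for this situation (not part of the original question) is to pull the swagger file in through the AWS::Include transform on the Body property instead of BodyS3Location; CloudFormation then processes the included file, so it may contain intrinsic functions such as Fn::Sub for the Lambda URI. A sketch, with the bucket, key, and logical ids as assumptions:

MyRestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Body:
      Fn::Transform:
        Name: AWS::Include
        Parameters:
          Location: s3://my-bucket/swagger.yaml

# Inside swagger.yaml the integration can then reference the stack's function:
# x-amazon-apigateway-integration:
#   type: aws_proxy
#   httpMethod: POST
#   uri:
#     Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaFunction.Arn}/invocations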
I use Serverless Offline to develop a web project.
I need API keys to restrict access to a resource on AWS Lambda deployed with Serverless.
I have a serverless.yml with my service and my provider.
In Postman I can access my route (http://127.0.0.1:3333/segments/UUID/test) without any error (such as a Forbidden message), and the Lambda is executed...
test:
  handler: src/Api/segment.test
  events:
    - http:
        path: segments/{segmentUuid}/test
        method: post
        request:
          parameters:
            paths:
              segmentUuid: true
        private: true
The route in question is not being protected, despite private: true.
https://www.npmjs.com/package/serverless-offline#token-authorizers
Serverless-offline will emulate the behaviour of APIG and create a random token that's printed on the screen. With this token you can access your private methods adding x-api-key: generatedToken to your request header. All api keys will share the same token. To specify a custom token use the --apiKey cli option.
Command will look like this:
sls offline --apiKey any-pregenerated-key
For local dev use this inside serverless.yml:
custom:
  serverless-offline:
    apiKey: 'your-key-here'
Or this inside serverless.ts:
custom: {
  'serverless-offline': {
    apiKey: 'your-key-here',
  },
},
Given the latest changes, this configuration worked for me with serverless-offline:
provider: {
  name: 'aws',
  region: region,
  runtime: 'nodejs14.x',
  stage: stage,
  apiGateway: {
    apiKeys: [{
      name: 'test name',
      value: 'sadasfasdasdasdasdafasdasasd'
    }],
  },
},
https://github.com/dherault/serverless-offline/issues/963
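Putting the pieces together, a minimal serverless.yml sketch that defines an API key on the provider and marks the route as private, so both serverless-offline and a real deployment expect an x-api-key header. The key name and value are placeholders (AWS requires key values of at least 20 characters):

provider:
  name: aws
  runtime: nodejs14.x
  apiGateway:
    apiKeys:
      - name: local-test-key
        value: 0123456789abcdef0123456789abcdef   # placeholder, >= 20 chars

functions:
  test:
    handler: src/Api/segment.test
    events:
      - http:
          path: segments/{segmentUuid}/test
          method: post
          private: true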