Force Zappa to use API Gateway HTTP API instead of REST

I want to use an AWS API Gateway HTTP API instead of the older REST API with my Lambda functions, for pricing reasons.
The difference is described here: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html
What is the Zappa option for this? Here is my zappa_settings.yml:
prod:
  s3_bucket: mybucket
  project_name: myproject
  app_function: app.app
  aws_region: eu-west-3
  domain: my.domain.com
  memory_size: 128
  lambda_concurrency: 10
  runtime: python3.8
  timeout_seconds: 30
  exception_handler: zappa_sentry.unhandled_exceptions
  keep_warm: false
  async_resources: false
I deploy with:
zappa update prod -s zappa_settings.yml
[EDIT]
Not sure if it's related, but I also ran into a Zappa deploy failure with AttributeError: 'Template' object has no attribute 'add_description',
and ended up using this requirements.txt (with python3.8):
flask==1.1.4
zappa==0.53.0
zappa_sentry==0.4.1
troposphere<3

As of September 2021, Zappa has no support for HTTP API Gateways. There is an open issue to add support here: https://github.com/zappa/Zappa/issues/851

Related

How to deploy GCP functions with Golang 1.15 (Serverless framework)

Is there a way to deploy a GCP function with Go 1.15 using the Serverless framework?
It looks like Go 1.15 is available (https://cloud.google.com/appengine/docs/standard/go/runtime), but I can't find a way to do it with Serverless.
serverless.yml
...
provider:
  name: google
  runtime: go115
...
I get this Invalid runtime error:
{"ResourceType":"gcp-types/cloudfunctions-v1:projects.locations.functions","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"The request has errors","status":"INVALID_ARGUMENT","details":[{"@type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"field":"runtime","description":"Invalid runtime."}]}],"statusMessage":"Bad Request","requestPath":"https://cloudfunctions.googleapis.com/v1/projects/**************/locations/us-central1/functions/*****","httpMethod":"PATCH"}}
The Go 1.15 runtime is not supported by either the regular or the beta version of Cloud Functions.
Use Go 1.13 instead, or use Cloud Run.
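For example, a provider block that falls back to the Go 1.13 runtime might look like this (a sketch based on the suggestion above, not from the original answer):
provider:
  name: google
  runtime: go113  # a Cloud Functions runtime accepted at the time; go115 is rejected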

ServerlessError: Documentation part already exists while deploying Serverless AWS apigateway

I am getting ServerlessError: Documentation part already exists during deployment after updating the version in the documentation object from version: "8.0.2" to version: "9.0.1".
[screenshots of the Serverless deployment error]
But it succeeds when I run the deployment again after bumping the version in the documentation object from version: "9.0.1" to version: "9.0.2".
custom:
  documentation:
    api:
      info:
        version: "9.0.2"
        title: "Mock APIs"
        description: "Mock Apis for my new project"
I am unable to understand how serverless-aws-documentation versioning works during deployment.

Can't hit spring-cloud-dataflow HTTP(source) application

I have been following a tutorial to create a stream with spring-cloud-dataflow. It creates the following stream:
http --port=7171 | transform --expression=payload.toUpperCase() | file --directory=c:/dataflow-output
All three applications start up fine. I am using RabbitMQ, and if I log in to the RabbitMQ UI I can see that two queues get created for the stream. The tutorial said that I should be able to POST a message to http://localhost:7171 using Postman. When I do this, nothing happens: I do not get a response, I do not see anything in the queues, and no file is created. In my dataflow logs I can see this being listed:
local: [{"targets":["skipper-server:20060","skipper-server:20052","skipper-server:7171"],"labels":{"job":"scdf"}}]
The tutorial was using an older version of Data Flow that I do not believe made use of Skipper. Since I am using Skipper, does that change the URL? I tried http://skipper-server:7171 and http://localhost:7171, but neither of these seems to reach the endpoint. I did turn off SSL certificate verification in the Postman settings.
Sorry for asking so many dataflow questions this week. Thanks in advance.
I found that the port I was trying to hit (7171), which was on my Skipper server, was not exposed. I had to expose the port in the skipper-server configuration in my .yml file. This post clued me in:
How to send HTTP requests to my server running in a docker container?
skipper-server:
  image: springcloud/spring-cloud-skipper-server:2.1.2.RELEASE
  container_name: skipper
  expose:
    - "7171"
  ports:
    - "7577:7577"
    - "9000-9010:9000-9010"
    - "20000-20105:20000-20105"
    - "7171:7171"
  environment:
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_LOW=20000
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_HIGH=20100
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:1111/dataflow
    - SPRING_DATASOURCE_USERNAME=xxxxx
    - SPRING_DATASOURCE_PASSWORD=xxxxx
    - SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver
    - SPRING_RABBITMQ_HOST=127.0.0.1
    - SPRING_RABBITMQ_PORT=xxxx
    - SPRING_RABBITMQ_USERNAME=xxxxx
    - SPRING_RABBITMQ_PASSWORD=xxxxx
  entrypoint: "./wait-for-it.sh mysql:1111 -- java -Djava.security.egd=file:/dev/./urandom -jar /spring-cloud-skipper-server.jar"

How to write a policy in .yaml for a python lambda to read from S3 using the aws sam cli

I am trying to deploy a Python lambda to AWS. This lambda just reads files from S3 buckets when given a bucket name and file path. It works correctly on the local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy in prod so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The lambda can now be seen in the Lambda console and I can run it, but no contents are returned. If I manually change the service role to one that allows S3 get/put, then the lambda works; however, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as the one shown there. So, under Resources:GetFileFromBucketFunction:Properties I added:
Policies: S3CrudPolicy
I then rebuild the app and re-deploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I delete the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    -
      Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll also need to write another one to let lambdas invoke other lambdas, so a solution for that (I imagine something similar?) would be great too, or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!
Found it!! In case anyone else struggles with this, you need to add the following few lines under Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.

Putting to local DynamoDB table with Python boto3 times out

I am attempting to programmatically put data into a locally running DynamoDB container by triggering a Python Lambda function.
I'm trying to follow the template provided here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.03.html
I am using the amazon/dynamodb-local you can download here: https://hub.docker.com/r/amazon/dynamodb-local
- Ubuntu 18.04.2 LTS to run the container and the Lambda server
- AWS SAM CLI to run my Lambda API
- Docker version 18.09.4
- Python 3.6 (you can see this in the SAM logs below)
- Startup command for the Python lambda is just "sam local start-api"
First, my Lambda code:
import json
import boto3


def lambda_handler(event, context):
    print("before grabbing dynamodb")
    # dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000", region_name='us-west-2', aws_access_key_id='RANDOM', aws_secret_access_key='RANDOM')
    dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
    table = dynamodb.Table('ContactRequests')
    try:
        # write a single test item to the local table
        response = table.put_item(
            Item={
                'id': "1234",
                'name': "test user",
                'email': "testEmail@gmail.com"
            }
        )
        print("response: " + str(response))
        return {
            "statusCode": 200,
            "body": json.dumps({
                "message": "hello world"
            }),
        }
    except Exception as e:
        # surface any DynamoDB errors in the logs
        print("put_item failed: " + str(e))
        raise
I have tested this with a variety of values in the boto3.resource call, including access keys, region names, and secret keys, with no improvement in the result.
I know that the ContactRequests table should be available at localhost:8000, because I can run the following command to view my Docker container's DynamoDB tables:
dev@ubuntu:~/Projects$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
    "TableNames": [
        "ContactRequests"
    ]
}
I am also able to successfully hit the localhost:8000/shell page that DynamoDB local offers.
Unfortunately, while this is running, if I hit the endpoint that triggers this handler, I get a timeout that is logged like so:
Fetching lambci/lambda:python3.6 Docker container image......
2019-04-09 15:52:08 Mounting /home/dev/Projects/sam-app/.aws-sam/build/HelloWorldFunction as /var/task:ro inside runtime container
2019-04-09 15:52:12 Function 'HelloWorldFunction' timed out after 3 seconds
2019-04-09 15:52:13 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received:
2019-04-09 15:52:13 127.0.0.1 - - [09/Apr/2019 15:52:13] "GET /hello HTTP/1.1" 502 -
Notice that none of my print statements are reached; if I remove the call to table.put_item, the print statements are successfully called.
I've seen similar questions on Stack Overflow, such as "lambda python dynamodb write gets timeout error", which state that the problem is that I am using a local DB. But shouldn't I still be able to write to a local DB with boto3 if I point it to my locally running DynamoDB instance?
Your Docker container running the Lambda function can't reach DynamoDB at 127.0.0.1. Instead, use the name of your local DynamoDB Docker container as the host name for the endpoint:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
You can use docker ps to find the <DynamoDB_LOCAL_NAME> or give it a name:
docker run --name dynamodb amazon/dynamodb-local
and then connect:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://dynamodb:8000")
Found the solution to the problem here: connecting AWS SAM Local with dynamodb in docker
The asker there noted that they had seen online that the containers may need to be connected to the same Docker network, created with:
docker network create local-lambda
So I created this network, then updated my sam command and my docker command to use it, like so:
docker run --name dynamodb -p 8000:8000 --network=local-lambda amazon/dynamodb-local
sam local start-api --docker-network local-lambda
After that, I no longer experienced the timeout issue.
I'm still working on understanding exactly why this was the issue.
To be fair, though, it was also important that I use the DynamoDB container name as the host in my boto3 resource call.
So in the end, it was a combination of the solution above and the answer provided by "Reto Aebersold" that created the final solution:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
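To tie the two parts together, here is a minimal Docker Compose sketch of the setup described above (an assumption-laden illustration, not from the original answers: it reuses the pre-created local-lambda network, and the service/container name dynamodb matches the hostname used in the boto3 call):
version: "3"
services:
  dynamodb:
    image: amazon/dynamodb-local
    container_name: dynamodb     # this name becomes the hostname in endpoint_url="http://dynamodb:8000"
    ports:
      - "8000:8000"
    networks:
      - local-lambda
networks:
  local-lambda:
    external: true               # created beforehand with: docker network create local-lambda
With this running, starting the API with sam local start-api --docker-network local-lambda puts the Lambda container on the same network, so it can resolve http://dynamodb:8000.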
