Putting to local DynamoDB table with Python boto3 times out - aws-lambda

I am attempting to programmatically put data into a locally running DynamoDB container by triggering a Python Lambda function.
I'm trying to follow the template provided here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.03.html
I am using the amazon/dynamodb-local image, which you can download here: https://hub.docker.com/r/amazon/dynamodb-local
Using Ubuntu 18.04.2 LTS to run the container and lambda server
AWS SAM CLI to run my Lambda API
Docker Version 18.09.4
Python 3.6 (You can see this in sam logs below)
The startup command for the Python Lambda is just "sam local start-api".
First, my Lambda code:
import json
import boto3

def lambda_handler(event, context):
    print("before grabbing dynamodb")
    # dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000", region_name='us-west-2',
    #                           aws_access_key_id='RANDOM', aws_secret_access_key='RANDOM')
    dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
    table = dynamodb.Table('ContactRequests')
    try:
        response = table.put_item(
            Item={
                'id': "1234",
                'name': "test user",
                'email': "testEmail@gmail.com"
            }
        )
        print("response: " + str(response))
    except Exception as e:  # except clause added so the snippet is valid Python
        print("put_item failed: " + str(e))
        raise
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "hello world"
        }),
    }
I know that the table ContactRequests should be available at localhost:8000, because I can run the following command to list the tables in my DynamoDB Docker container. I have also tested this with a variety of values in the boto3.resource call, including access keys, region names, and secret keys, with no improvement in the result.
dev@ubuntu:~/Projects$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
    "TableNames": [
        "ContactRequests"
    ]
}
I am also able to successfully hit the localhost:8000/shell endpoint that DynamoDB Local offers.
Unfortunately, while it is running, if I hit the endpoint that triggers this handler, I get a timeout that logs like so:
Fetching lambci/lambda:python3.6 Docker container image......
2019-04-09 15:52:08 Mounting /home/dev/Projects/sam-app/.aws-sam/build/HelloWorldFunction as /var/task:ro inside runtime container
2019-04-09 15:52:12 Function 'HelloWorldFunction' timed out after 3 seconds
2019-04-09 15:52:13 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received:
2019-04-09 15:52:13 127.0.0.1 - - [09/Apr/2019 15:52:13] "GET /hello HTTP/1.1" 502 -
Notice that none of my print statements are reached; if I remove the call to table.put_item, the print statements are called successfully.
I've seen similar questions on Stack Overflow, such as lambda python dynamodb write gets timeout error, that state the problem is that I am using a local DB. But shouldn't I still be able to write to a local DB with boto3 if I point it at my locally running DynamoDB instance?

Your Docker container running the Lambda function can't reach DynamoDB Local at 127.0.0.1, because inside that container localhost refers to the Lambda container itself. Instead, try the name of your DynamoDB Local Docker container as the host name for the endpoint:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
You can use docker ps to find the <DynamoDB_LOCAL_NAME>, or give the container a name yourself:
docker run --name dynamodb amazon/dynamodb-local
and then connect:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://dynamodb:8000")
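Applied to the handler from the question, the resource call becomes something like the sketch below (assuming the container is named dynamodb; DynamoDB Local accepts any dummy region and credentials):
import boto3

# "dynamodb" is the assumed container name from `docker run --name dynamodb ...`
dynamodb = boto3.resource(
    'dynamodb',
    endpoint_url="http://dynamodb:8000",
    region_name='us-west-2',         # any region works against DynamoDB Local
    aws_access_key_id='RANDOM',      # dummy credentials are accepted
    aws_secret_access_key='RANDOM',
)
table = dynamodb.Table('ContactRequests')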

Found the solution to the problem here: connecting AWS SAM Local with dynamodb in docker
The question asker noted that he had seen online that he might need to connect both containers to the same Docker network, using:
docker network create local-lambda
So I created this network, then updated my sam command and my docker command to use this network, like so:
docker run --name dynamodb -p 8000:8000 --network=local-lambda amazon/dynamodb-local
sam local start-api --docker-network local-lambda
After that I no longer experienced the timeout issue.
I'm still working on understanding exactly why this was the issue.
To be fair, though, it was also important that I use the dynamodb container name as the host in my boto3 resource call.
So in the end, it was a combination of the solution above and the answer provided by "Reto Aebersold" that created the final solution:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
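For completeness, once both containers share the Docker network, you can check from the host that the /hello endpoint actually wrote the item (a sketch; it assumes id is the table's partition key, and it uses localhost:8000 because port 8000 is published to the host):
import boto3

# Run on the host after hitting GET /hello through `sam local start-api`.
dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
item = dynamodb.Table('ContactRequests').get_item(Key={'id': "1234"}).get('Item')
print(item)  # should show the record written by the Lambda handler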

Related

describe_instance_status with boto3 with filter "running" skips instances for region ap-east-1

The following code snippet uses the latest version of boto3 and looks for all "running" instances in ap-east-1; the client is created with that specific region (ap-east-1):
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="ap-east-1")

try:
    running_instances = ec2.describe_instance_status(
        Filters=[
            {
                "Name": "instance-state-name",
                "Values": ["running"],
            },
        ],
        InstanceIds=<list of instance_ids>,
    )
except ClientError as e:
    <catch exception>
The result is an empty list even though there are running EC2 instances.
The above snippet works for all other regions, though.
The AWS command aws ec2 describe-instance-status --region ap-east-1 --filters Name="instance-state-name",Values="running" --instance-ids <list of instance ids> returns the running instances with the same filter.
What am I missing for this region specifically while using boto3?
Is there a specific version of boto3 that works for ap-east-1 region?
I asked the question at https://github.com/boto/boto3/issues/3575 and they helped with debugging it.
Regional STS needed to be enabled at the organization payer (management account) level for opt-in regions such as ap-east-1.
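As a quick sanity check (my own sketch, not from the linked issue), you can verify that regional STS credentials actually work for ap-east-1 before calling EC2:
import boto3

# Call the regional STS endpoint for ap-east-1; if regional STS is not enabled
# for this account/organization, this call will fail.
sts = boto3.client(
    "sts",
    region_name="ap-east-1",
    endpoint_url="https://sts.ap-east-1.amazonaws.com",
)
print(sts.get_caller_identity()["Account"])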

Specifying MSK credentials in an AWS CDK stack

I have code that seems to "almost" deploy. It will fail with the following error:
10:55:25 AM | CREATE_FAILED | AWS::Lambda::EventSourceMapping | QFDSKafkaEventSour...iltynotifyEFE73996
Resource handler returned message: "Invalid request provided: The secret provided in 'sourceAccessConfigurations' is not associated with cluster some-valid-an. Please provide a secret associated with the cluster. (Service: Lambda, Status Code: 400, Request ID: some-uuid )" (RequestToken: some-uuid, HandlerErrorCode: InvalidRequest)
I've cobbled together the cdk stack from multiple tutorials, trying to learn CDK. I've gotten it to the point that I can deploy a lambda, specify one (or more) layers for the lambda, and even specify any of several different sources for triggers. But our production Kafka requires credentials... and I can't figure out for the life of me how to supply those so that this will deploy correctly.
Obviously, those credentials shouldn't be included in the git repo of my codebase. I assume I will have to set up a Secrets Manager secret with part or all of the values. We're using scram-sha-512, and it includes a user/pass pair. The 'secret_name' value to Secret() is probably the name/path of the Secrets Manager secret. I have no idea what the second, unnamed param is for, and I'm having trouble figuring that out. Can anyone point me in the right direction?
Stack code follows:
#!/usr/bin/env python3
from aws_cdk import (
    aws_lambda as lambda_,
    App, Duration, Stack
)
from aws_cdk.aws_lambda_event_sources import ManagedKafkaEventSource
from aws_cdk.aws_secretsmanager import Secret


class ExternalRestEndpoint(Stack):
    def __init__(self, app: App, id: str) -> None:
        super().__init__(app, id)

        secret = Secret(self, "Secret", secret_name="integrations/msk/creds")
        msk_arn = "some valid and confirmed arn"

        # Lambda layer.
        lambdaLayer = lambda_.LayerVersion(self, 'lambda-layer',
            code=lambda_.AssetCode('utils/lambda-deployment-packages/lambda-layer.zip'),
            compatible_runtimes=[lambda_.Runtime.PYTHON_3_7],
        )

        # Source for the lambda.
        with open("src/path/to/sourcefile.py", encoding="utf8") as fp:
            mysource_code = fp.read()

        # Config for it.
        lambdaFn = lambda_.Function(
            self, "QFDS",
            code=lambda_.InlineCode(mysource_code),
            handler="lambda_handler",
            timeout=Duration.seconds(300),
            runtime=lambda_.Runtime.PYTHON_3_7,
            layers=[lambdaLayer],
        )

        # Set up the event source (managed Kafka).
        lambdaFn.add_event_source(ManagedKafkaEventSource(
            cluster_arn=msk_arn,
            topic="foreign.endpoint.availabilty.notify",
            secret=secret,
            batch_size=100,  # default
            starting_position=lambda_.StartingPosition.TRIM_HORIZON
        ))
Looking at the code sample, I understand that you are working with Amazon MSK as an event source, and not a self-managed (cross-account) Kafka cluster.
I assume I will have to set up a Secrets Manager secret with part or all of the values
You don't need to set up new credentials. If you use MSK with SASL/SCRAM, you already have credentials, which must be associated with the MSK cluster.
As you can see from the doc, your secret name should start with AmazonMSK_, for example AmazonMSK_LambdaSecret.
So, in the code above, you will need to fix this line:
secret = Secret(self, "Secret", secret_name="AmazonMSK_LambdaSecret")
I assume you are already aware of the CDK Python doc, but I will just add it here for reference.
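If the SCRAM secret already exists and is already associated with the cluster, you will likely want to reference it instead of creating a new one; a minimal sketch, assuming CDK v2's Secret.from_secret_name_v2:
from aws_cdk.aws_secretsmanager import Secret

# Reference the existing secret associated with the MSK cluster
# (the name is illustrative and must start with AmazonMSK_).
secret = Secret.from_secret_name_v2(self, "MskSecret", "AmazonMSK_LambdaSecret")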

How to write a policy in .yaml for a python lambda to read from S3 using the aws sam cli

I am trying to deploy a python lambda to aws. This lambda just reads files from s3 buckets when given a bucket name and file path. It works correctly on the local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy in prod so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The lambda can now be seen in the Lambda console. I can run it, but no contents are returned; if I change the service role manually to one which allows S3 get/put, then the lambda works. However, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties:'. I then rebuild the app and re-deploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I deleted the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any very simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    -
      Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll also need to write another one to let lambdas invoke other lambdas, so a solution here (I imagine something similar?) would be great, or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!
Found it!! In case anyone else struggles with this, you need to add the following few lines under Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.

How to download AWS Lambda Layer

Using the AWS CLI, is it possible to download a Lambda Layer?
I have seen this documented command:
https://docs.aws.amazon.com/lambda/latest/dg/API_GetLayerVersion.html
But when I try to run it with something like the below:
aws lambda get-layer-version --layer-name arn:aws:lambda:us-east-1:209497400698:layer:php-73 --version-number 7
I get this error.
An error occurred (InvalidParameterValueException) when calling the
GetLayerVersion operation: Invalid Layer name:
arn:aws:lambda:us-east-1:209497400698:layer:php-73
Is downloading a layer possible via the CLI?
As an extra note I am trying to download any of these layers
https://runtimes.bref.sh/
It should be possible to download a layer programmatically using the AWS CLI. For example:
# https://docs.aws.amazon.com/cli/latest/reference/lambda/get-layer-version.html
URL=$(aws lambda get-layer-version --layer-name YOUR_LAYER_NAME_HERE --version-number YOUR_LAYERS_VERSION --query Content.Location --output text)
curl $URL -o layer.zip
For the ARNs on that web page, I had to use the other API, which takes an ARN value. For example:
# https://docs.aws.amazon.com/cli/latest/reference/lambda/get-layer-version-by-arn.html
URL=$(aws lambda get-layer-version-by-arn --arn arn:aws:lambda:us-east-1:209497400698:layer:php-73:7 --query Content.Location --output text)
curl $URL -o php.zip
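If you prefer boto3 over the CLI, a rough equivalent might look like this (a sketch; the ARN is the php-73 layer from the question):
import boto3
import urllib.request

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Look up the layer version by ARN and fetch the pre-signed download URL.
response = lambda_client.get_layer_version_by_arn(
    Arn="arn:aws:lambda:us-east-1:209497400698:layer:php-73:7"
)
url = response["Content"]["Location"]

# Download the layer contents to a local zip file.
urllib.request.urlretrieve(url, "php.zip")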
HTH
-James

I have code which runs in Lambda but not in Python

I have code which runs in Lambda, but the same code does not work on my system.
asgName="test"
def lambda_handler(event, context):
client = boto3.client('autoscaling')
asgName="test"
response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
if not response['AutoScalingGroups']:
return 'No such ASG'
...
...
...
The code below is what I try to run on Linux, but it gives the error "No such ASG":
asgName="test"
client = boto3.client('autoscaling')
response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
if not response['AutoScalingGroups']:
return 'No such ASG'
The first thing to check is that you are connecting to the correct AWS region. If a region is not specified in the code, boto3 falls back to the region configured in your environment or in the AWS config/credentials files, which might not be where your Auto Scaling group lives.
In your code, you can specify the region with:
client = boto3.client('autoscaling', region_name='us-west-2')
The next thing to check is that the credentials are associated with the correct account. The AWS Lambda function is obviously running in your desired account, but you should confirm that the code running "in linux" is using the same AWS account.
You can do this by using the AWS Command-Line Interface (CLI), which will use the same credentials as your Python code on the Linux computer. Run:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test
It should give the same result as the Python code running on that computer.
You might need to specify the region:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test --region us-west-2
(Of course, change your region as appropriate.)
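To see at a glance which region and account your Linux credentials resolve to, a small check along these lines can help (a sketch using boto3's Session and STS):
import boto3

session = boto3.session.Session()
print("Region: ", session.region_name)  # None means no default region is configured

# STS reports which account the active credentials belong to.
sts = session.client("sts", region_name=session.region_name or "us-east-1")
print("Account:", sts.get_caller_identity()["Account"])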
