How to download AWS Lambda Layer

Using the AWS CLI, is it possible to download a Lambda Layer?
I have seen this documented command:
https://docs.aws.amazon.com/lambda/latest/dg/API_GetLayerVersion.html
But when I try to run it with something like the following:
aws lambda get-layer-version --layer-name arn:aws:lambda:us-east-1:209497400698:layer:php-73 --version-number 7
I get this error.
An error occurred (InvalidParameterValueException) when calling the
GetLayerVersion operation: Invalid Layer name:
arn:aws:lambda:us-east-1:209497400698:layer:php-73
Is downloading a layer possible via the CLI?
As an extra note, I am trying to download any of these layers:
https://runtimes.bref.sh/

It should be possible to download a layer programmatically using the AWS CLI. For example:
# https://docs.aws.amazon.com/cli/latest/reference/lambda/get-layer-version.html
URL=$(aws lambda get-layer-version --layer-name YOUR_LAYER_NAME_HERE --version-number YOUR_LAYERS_VERSION --query Content.Location --output text)
curl $URL -o layer.zip
For the ARNs on that web page, I had to use the other API, which takes an ARN value. For example:
# https://docs.aws.amazon.com/cli/latest/reference/lambda/get-layer-version-by-arn.html
URL=$(aws lambda get-layer-version-by-arn --arn arn:aws:lambda:us-east-1:209497400698:layer:php-73:7 --query Content.Location --output text)
curl $URL -o php.zip
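If you then want to look inside the layer once it's downloaded, a small follow-up sketch (the output directory name here is just an example):
# A layer is a plain zip of the files that end up under /opt in the Lambda runtime
unzip -o php.zip -d php-73-layer
ls php-73-layer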
HTH
-James

Related

An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation

When I deploy a Lambda function using the command below, the following error occurs:
aws lambda update-function-code --function-name example --zip-file fileb://lambda.zip
An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation
As far as I can tell, my zip size cannot be reduced any further.
How can I avoid this, or is there an alternative way to deploy?
There is a 50 MB limit on direct uploads:
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
You can use a LambdaLayer as a workaround to this problem:
https://lumigo.io/blog/lambda-layers-when-to-use-it/
https://aws.amazon.com/blogs/compute/using-lambda-layers-to-simplify-your-development-process/
With layers, you get a 250 MB limit.
A disclosure: I'm a developer at Lumigo; the first link above is our blog post on this topic, and I've also shared an official AWS post.
Although layers are a good way to solve this, you can also upload your .zip file to S3 and update your function with something like:
aws lambda update-function-code --function-name LAMBDA_NAME --region REGION_NAME --s3-bucket BUCKET_NAME --s3-key S3_KEY/TO/PACKAGE.zip
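For completeness, a sketch of the full S3 route, first uploading the package and then pointing the function at the uploaded object (the bucket name and key here are placeholders):
# Upload the deployment package to S3
aws s3 cp lambda.zip s3://BUCKET_NAME/path/to/lambda.zip
# Update the function from the object in S3 instead of a direct upload
aws lambda update-function-code --function-name example --s3-bucket BUCKET_NAME --s3-key path/to/lambda.zip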

Passing complex parameters to `aws cloudformation deploy`

From PowerShell I'm calling aws cloudformation deploy against LocalStack using a template generated by the CDK:
aws --endpoint-url http://localhost:4566 cloudformation deploy --template-file ./cdk.out/my.template.json --stack-name my-stack --parameter-overrides functionLambdaSourceBucketNameParameter55F17A81=`{`"BucketName`":`"my-bucket`",`"ObjectKey`":`"code.zip`"`} functionLambdaSourceObjectKeyParameterB7223CBC=`{`"BucketName`":`"my-bucket`",`"ObjectKey`":`"code.zip`"`}
The code executes and I get a cryptic message back: 'Parameters'. Also, the stack events show a failure but don't include any reason:
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": "2020-09-24T19:03:28.388072+00:00",
"ResourceStatus": "CREATE_FAILED"
I assume there is something wrong with the format of my parameters, but I cannot find any examples of complex parameter values. The parameters are CodeParameters which have both BucketName and ObjectKey properties.
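For comparison, here is a sketch of the simpler form --parameter-overrides takes when the template's parameters are plain String values; whether this CDK-generated template actually declares them that way is only an assumption here:
# Assumes each CDK-generated parameter is an ordinary String taking a single value;
# single quotes keep PowerShell from interpreting the arguments
aws --endpoint-url http://localhost:4566 cloudformation deploy --template-file ./cdk.out/my.template.json --stack-name my-stack --parameter-overrides 'functionLambdaSourceBucketNameParameter55F17A81=my-bucket' 'functionLambdaSourceObjectKeyParameterB7223CBC=code.zip'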

How to write a policy in .yaml for a python lambda to read from S3 using the aws sam cli

I am trying to deploy a Python lambda to AWS. This lambda just reads files from S3 buckets when given a bucket name and file path. It works correctly on the local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the lambda is packaged and sent to my-bucket:
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy to prod, so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The lambda can now be seen in the Lambda console and I can run it, but no contents are returned. If I change the service role manually to one which allows S3 get/put, then the lambda works. However, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as the one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties'. I then rebuild the app and re-deploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number)
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I delete the stack to start again. My thoughts were that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any very simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    -
      Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll need to write another one to let lambdas invoke other lambdas as well, so a solution here (I imagine something similar?) would be great. Or a decent, easy-to-use guide on how to write these policy statements?
Many thanks for your help!
Found it!! In case anyone else struggles with this, you need to add the following few lines to Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.

I have code which runs in Lambda but not in plain Python

I have code which runs in Lambda, but the same code does not work on my system.
import boto3

asgName = "test"

def lambda_handler(event, context):
    client = boto3.client('autoscaling')
    asgName = "test"
    response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
    if not response['AutoScalingGroups']:
        return 'No such ASG'
    ...
    ...
    ...
When I try to run the code below on Linux, it gives the error "No such ASG":
import boto3

asgName = "test"
client = boto3.client('autoscaling')
response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
if not response['AutoScalingGroups']:
    return 'No such ASG'
The first thing to check is that you are connecting to the correct AWS region. If not specified, it defaults to us-east-1 (N. Virginia). A region can also be specified in the credentials file.
In your code, you can specify the region with:
client = boto3.client('autoscaling', region_name = 'us-west-2')
The next thing to check is that the credentials are associated with the correct account. The AWS Lambda function is obviously running in your desired account, but you should confirm that the code running "in linux" is using the same AWS account.
You can do this by using the AWS Command-Line Interface (CLI), which will use the same credentials as your Python code on the Linux computer. Run:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test
It should give the same result as the Python code running on that computer.
You might need to specify the region:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test --region us-west-2
(Of course, change your region as appropriate.)
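To make the account and region check explicit, a small sketch using the same credentials the CLI and boto3 resolve:
# Show which account and identity the local credentials belong to
aws sts get-caller-identity
# Show which profile, credential source, and region the CLI is picking up
aws configure list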

AWS Spark Cluster setup errors

I have created an AWS keypair.
I am following the instructions here word for word: https://aws.amazon.com/articles/4926593393724923
When I type in "aws emr create-cluster --name SparkCluster --ami-version 3.2 --instance-type m3.xlarge --instance-count 3 --ec2-attributes KeyName=MYKEY --applications Name=Hive --bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark"
replacing MYKEY with both the full path and just the name of my key pair (I've tried everything), I get the following error:
A client error (InvalidSignatureException) occurred when calling the RunJobFlow operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
The Canonical String for this request should have been
'POST
/
content-type:application/x-amz-json-1.1
host:elasticmapreduce.us-east-1.amazonaws.com
user-agent:aws-cli/1.7.5 Python/2.7.8 Darwin/14.1.0
x-amz-date:20150210T180927Z
x-amz-target:ElasticMapReduce.RunJobFlow
content-type;host;user-agent;x-amz-date;x-amz-target
dbb58908194fa8deb722fdf65ccd713807257deac18087025cec9a5e0d73c572'
The String-to-Sign should have been
'AWS4-HMAC-SHA256
20150210T180927Z
20150210/us-east-1/elasticmapreduce/aws4_request
c83894ad3b43c0657dac2c3ab7f53d384b956087bd18a3113873fceeabc4ae26'
What am I doing wrong?
GOT IT. Sadly, the above page mentions nothing about having to set the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY. You must do this first; I learned that from a totally different setup guide: http://spark.apache.org/docs/1.2.0/ec2-scripts.html.
After I set that, the Amazon instructions worked.
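For reference, a sketch of exporting the credentials in a shell before running the CLI; note that the AWS CLI itself reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and the values below are placeholders:
# Placeholder values; use your own IAM access key pair
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Optionally pin the region as well
export AWS_DEFAULT_REGION=us-east-1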
