Get previous version of code in AWS Lambda

I deploy very simple code to AWS Lambda using these commands:
zip http-endpoint lambda_function.py
aws lambda update-function-code --function-name 'http-endpoint' --zip-file fileb://http-endpoint.zip --region us-east-1
Is it possible to see a previous version of the code somehow?

You can get data about your function using get-function. This returns a Code object containing the location of your function's code artifact. See the docs (scroll down to Output):
Location -> (string)
The presigned URL you can use to download the function's .zip file that you previously uploaded. The URL is valid for up to 10 minutes.
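For example, a minimal sketch (function name and region taken from the question):

# Fetch the presigned URL for the currently deployed artifact...
URL=$(aws lambda get-function \
  --function-name http-endpoint \
  --region us-east-1 \
  --query 'Code.Location' --output text)
# ...and download it before the URL expires (valid for ~10 minutes)
curl -o http-endpoint-current.zip "$URL"

Note that this only retrieves whatever is deployed right now. To be able to fetch genuinely previous code, you would need to publish a version on each deploy (for example with the --publish flag on update-function-code) and then pass that version number to get-function via --qualifier.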

How do you deploy CloudFormation with a Lambda function without inline code?

The Lambda function code is over 4096 characters, so I can't deploy the Lambda function as inline code in a CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and file details in CloudFormation, and deploy it.
I can't find a way to deploy with one command. If I update the Lambda code, I have to repeat the above steps.
But both AWS SAM and the Serverless Framework can deploy Lambda functions without inline code.
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
Any solutions or recommendations for me?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads it to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to set up an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
--template-file input.yaml \
--s3-bucket my-packaging-bucket \
--s3-prefix my-project/ \
--output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
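For instance, a minimal sketch of such a script, reusing the bucket and template names from the example above (the stack name is an assumption):

#!/usr/bin/env bash
set -euo pipefail
# Step 1: zip local Code paths, upload to S3, and rewrite the template
aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml
# Step 2: create or update the stack from the rewritten template
# (--capabilities CAPABILITY_IAM is only needed if the stack creates IAM resources)
aws cloudformation deploy \
  --template-file output.yaml \
  --stack-name my-project \
  --capabilities CAPABILITY_IAM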
You can zip the file first, then use the AWS CLI to update your Lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
Within CloudFormation (last 3 lines):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
For the Serverless Framework, that's not true by default. The default generated serverless.yml file includes config for the Lambda function itself, but the configuration for API Gateway is provided only as an example in the following commented-out section.
If you uncomment the events section for http, then it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get
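For contrast, a minimal sketch of the same function with the http event uncommented; this is the shape that does create an API Gateway endpoint:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get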

DevOps: AWS Lambda .zip with Terraform

I have written Terraform to create a Lambda function in AWS.
This includes specifying my Python code, zipped.
Running it from the command line on my tech box, all goes well.
The terraform apply action uploads my zip to AWS and uses it to create the Lambda.
Key section of code:
resource "aws_lambda_function" "meta_lambda" {
filename = "get_resources.zip"
source_code_hash = filebase64sha256("get_resources.zip")
.....
Now, to get this into other environments, I have to push my Terraform via Azure DevOps.
However, when I try to build in DevOps, I get the following :
Error: Error in function call

  on main.tf line 140, in resource "aws_lambda_function" "meta_lambda":
  140:  source_code_hash = filebase64sha256("get_resources.zip")

Call to function "filebase64sha256" failed: no file exists at get_resources.zip.
I have a feeling that I am missing a key concept here, as I can see the .zip in the repo - so I do not understand how it is not found by the build?
Any hints/clues as to what I am doing wrong, gratefully welcome.
Chaps, I'm afraid that I may have just been in over my head here - new to Terraform and DevOps!
I had a word with our (more) tech folks and they have sorted this.
The reason I think yours is failing is because the Tar Terraform step needs to use a different command line so the zip file gets included in the artifacts. Instead of:
tar -cvpf terraform.tar .terraform .tf tfplan
it should be:
tar --recursion -cvpf terraform.tar --exclude='/.git' --exclude='.gitignore' .
...if that means anything to you!
Whatever they did, it works!
As there is a bounty on this, I'm still going to assign it, as I am grateful for the input!
Sorry if this was a bit of a newbie error.
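To see why the tar change matters: filebase64sha256() resolves get_resources.zip relative to Terraform's working directory on the build agent, so if the archive step doesn't carry the zip across, the file simply isn't there at plan/apply time. A hedged sanity check you could drop into the pipeline just before running Terraform:

# Run from the directory the pipeline executes terraform in; if this fails,
# the artifact step did not carry the zip across to the build agent.
ls -l get_resources.zip || { echo "get_resources.zip missing from working directory"; exit 1; }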
You can try building your package with the Terraform AWS Lambda build module, as it has been very useful for this process: Terraform Lambda build module.
According to the document example, in the source_code_hash argument, filebase64sha256("get_resources.zip") needs to be enclosed in double quotes.
You can refer to this document for details.

cx_Oracle problem in AWS Lambda built using AWS CodeBuild

I'm trying to use cx_Oracle to connect to an RDS (Oracle) database from inside an AWS Lambda function (Python 3.7). Moreover, the Lambda function itself is automatically built by AWS CodeBuild using a buildspec.yml file. CodeBuild itself runs via AWS CodePipeline, which is configured so that whenever the repository I keep my code in (in this case AWS CodeCommit) is updated, it automatically builds everything.
Things that I have done:
1. I have an AWS Lambda function with code as follows.
import cx_Oracle

def lambda_handler(event, context):
    dsn = cx_Oracle.makedsn('www.host.com', '1521', 'dbname')
    connection = cx_Oracle.connect(user='user', password='password', dsn=dsn)
    cursor = connection.cursor()
    cursor.execute('select * from table_name')
    return cursor
2. Inside the buildspec.yml I have the following build commands.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install cx_Oracle -t ./ # to install cx_Oracle package in the same directory as the script
      - unzip instantclient-basic-linux*.zip -d /opt/oracle # I have downloaded the zip file beforehand
      <other code>
3. I have also configured the template.yml of the Lambda function as follows:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Making a test lambda function using codepipeline
Resources:
  funcAuthorityReceive:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: testFunction
      Environment:
        Variables:
          PATH: '/opt/oracle/instantclient_19_5:$PATH'
          LD_LIBRARY_PATH: '$LD_LIBRARY_PATH:/opt/oracle/instantclient_19_5'
      Handler: lambda_function.lambda_handler
      MemorySize: 128
      Role: 'arn:aws:iam::XXXXXXXXXXXXXX:role/role-for-lambda'
      Runtime: python3.7
      CodeUri: ./
Here, everything runs smoothly and the Lambda function itself gets built, but when I run the lambda this error shows up:
"DPI-1047: Cannot locate a 64-bit Oracle Client library: \"libclntsh.so: cannot open shared object file: No such file or directory\". See https://oracle.github.io/odpi/doc/installation.html#linux for help"
Any help would be greatly appreciated.
When you want to use cx_Oracle to reach your Oracle database, then the moment you zip the Lambda package (code and other dependencies), make sure you preserve the symlinks:
zip --symlinks -r lambda.zip .
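For context (my addition, based on how Oracle ships Instant Client): libclntsh.so is normally a symlink to the versioned library, which is exactly what --symlinks preserves:

# Inside the unzipped Instant Client directory (version number illustrative)
ls -l libclntsh.so
# libclntsh.so -> libclntsh.so.19.1
# Without --symlinks, zip stores a full copy of the target instead of the link,
# which bloats the package and can break layouts that rely on the link names.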
I haven't worked with CodeBuild, but I have built the Lambda package on a Linux server; soon I will be creating a build pipeline in Azure DevOps.

AWS CloudFormation update Lambda Code to use latest version in S3 bucket

I'm trying to create a CloudFormation template supporting a Lambda function and an AWS CodeBuild project for building .NET Core source code into a deployed zip file in an S3 bucket.
Here are the particulars:
Using a GitHub mono-repo with multiple Lambda functions as different projects in the .NET Core solution
Each Lambda function (aka .NET Core project) has a CloudFormation YAML file generating a stack containing the Lambda function itself and a CodeBuild project.
The CodeBuild project is initiated from a GitHub webhook, which retrieves the code from the GitHub sub-project and uses its buildspec.yaml to govern how the build should happen.
The buildspec uses .NET Core to build the project, then zips and copies the output to a target S3 bucket
The Lambda function points to the S3 bucket for its source code
This is all working just fine. What I'm struggling with is how to update the Lambda function to use the updated compiled source code in the S3 bucket.
Here is a subset of the CloudFormation template:
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: roicalculator-eventpublisher
      Handler: RoiCalculator.Serverless.EventPublisher::RoiCalculator.Serverless.EventPublisher.Function::FunctionHandler
      Code:
        S3Bucket: deployment-artifacts
        S3Key: RoiCalculatorEventPublisher.zip
      Runtime: dotnetcore2.1
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: RoiCalculator-EventPublisher-Master
      Artifacts:
        Location: deployment-artifacts
        Name: RoiCalculatorEventPublisher.zip
        Type: S3
      Source:
        Type: GITHUB
        Location: https://github.com/XXXXXXX
        BuildSpec: RoiCalculator.Serverless.EventPublisher/buildspec.yml
Here is a subset of buildspec.yaml:
phases:
  install:
    runtime-versions:
      dotnet: 2.2
    commands:
      - dotnet tool install -g Amazon.Lambda.Tools
  build:
    commands:
      - dotnet restore
      - cd RoiCalculator.Serverless.EventPublisher
      - dotnet lambda package --configuration release --framework netcoreapp2.1 -o .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip
      - aws s3 cp .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip s3://deployment-artifacts/RoiCalculatorEventPublisher.zip
You can see the same artifact name (RoiCalculatorEventPublisher.zip) and S3 bucket (deployment-artifacts) are being used in the buildspec (for generating and copying) and in the CloudFormation template (for the Lambda function's source).
Since I'm overwriting the application code in the S3 bucket using the same file name Lambda is using, how come Lambda is not being updated with the latest code?
How do version numbers work? Is it possible to have a 'system variable' containing the name of the artifact (file name + version number) and access the same 'system variable' in the buildspec AND the CloudFormation template?
What's the secret sauce for using a CloudFormation template to build the source code (via buildspec) with CodeBuild as well as update the Lambda function which consumes the built code?
Thank you.
Unfortunately, unless you change the "S3Key" on 'AWS::Lambda::Function' resource on every update, CloudFormation will not see it as a change (it will not look inside the zipped code for changes).
Options:
Option 1) Update the S3 key with every upload (see the sketch after this list)
Option 2) The recommended approach is to use AWS SAM to author the Lambda template, then use the "cloudformation package" command to package the template, which takes care of creating a unique S3 key and uploading the file to the bucket. Details here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
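A minimal sketch of Option 1 as a shell step. CODEBUILD_RESOLVED_SOURCE_VERSION is a standard CodeBuild variable (the commit hash); the CodeKey template parameter is hypothetical and would have to be added to the template and referenced from the function's Code.S3Key:

# Make the artifact key unique per commit so CloudFormation sees a change
KEY="RoiCalculatorEventPublisher-${CODEBUILD_RESOLVED_SOURCE_VERSION}.zip"
aws s3 cp RoiCalculatorEventPublisher.zip "s3://deployment-artifacts/${KEY}"
# Feed the new key into the stack (assumes a CodeKey parameter in the template)
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name roicalculator-eventpublisher \
  --parameter-overrides CodeKey="${KEY}"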
Edit 1:
In response to your comment, let me add some details about the SAM approach, which uses CloudFormation as the deployment tool for your Lambda function in your pipeline. The basic idea to deploy a Lambda function is as follows:
1) Create a SAM template of your Lambda function
2) A basic SAM template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  FunctionName:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./code
3) Add a directory "code" and keep the lambda code files in this directory
4) Install the SAM CLI [1]
5) Run the command to package and upload:
$ sam package --template-file template.yaml --output-template packaged.yaml --s3-bucket {your_S3_bucket}
6) Deploy the package:
$ aws cloudformation deploy --template-file packaged.yaml --stack-name stk1 --capabilities CAPABILITY_IAM
You can keep the template code (Steps 1-2) in CodeCommit/GitHub and do Steps 4-5 in a CodeBuild step. For Step 6, I recommend doing it via a CloudFormation action in CodePipeline that is fed the "packaged.yaml" file as an input artifact.
See also [2].
References:
[1] Installing the AWS SAM CLI on Linux - https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html
[2] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/en_us/lambda/latest/dg/build-pipeline.html
I am using aws s3 sync instead of aws s3 cp and have never had this problem.
I am working on a project with a serverless architecture and multiple Lambdas, wherein we have multiple folders, each with just a Python file and a requirements.txt file inside it.
Usually the directory and the Lambda are named the same for convenience; for example, the folder email_sender would have the Python file email_sender.py and a requirements.txt if it needs one.
In CodeBuild, after installing the dependencies, this is how we zip:
echo "--- Compiling lambda zip: ${d}.zip"
d=$(tr "_" "-" <<< "${d}")
zip -q -r ${d}.zip . --exclude ".gitignore" --exclude "requirements.txt" --exclude "*__pycache__/*" > /dev/null 2>&1
mv ${d}.zip ../../${CODEBUILD_SOURCE_VERSION}/${d}.zip
And while copying to the S3 bucket, we use aws s3 sync as follows:
aws s3 sync ${CODEBUILD_SOURCE_VERSION}/ ${S3_URI} --exclude "*" --include "*.zip" --sse aws:kms --sse-kms-key-id ${KMS_KEY_ALIAS} --content-type "binary/octet-stream" --exact-timestamps

run lambda function with input parameter in serverless framework command line

I have started an AWS project with the help of the Serverless Framework, but I have one question regarding running a Lambda function.
How can I run a Lambda function with input parameters? I can do it via the Amazon console (Lambda test configuration -> test event), but I cannot find a corresponding function in Serverless. Does anyone know?
Thanks
For the lambda part
You can use an event.json file:
{
  "principalId": "1234",
  "inputVar": "foo"
}
and then run sls function run.
According to the docs, if you don't specify any stage, the function will run locally; if you do specify a stage, the function will run the deployed code in the corresponding stage. BUT the docs seem outdated; you also need to pass the -d flag, like:
sls function run myFunction -s dev -d
This command will invoke your deployed Lambda function, with parameters from your local event.json file.
Here is the source code for function run options.
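If you just want to invoke the deployed function with that payload outside the framework, the plain AWS CLI equivalent would be something like this (on AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out):

aws lambda invoke \
  --function-name myFunction \
  --payload file://event.json \
  response.json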
For APIG integration
There are some samples in the documentation.
If you don't want to use templates, you can just insert the related code in your s-function.json, inside the endpoint description.
"endpoints": [
...
"requestTemplates": {
"application/json": {
"principalId": "$context.authorizer.principalId",
"apiKey": "$context.identity.apiKey",
"inputVar": "$input.json('inputVar')"
}
}
...
]
The syntax is as described in the API Gateway "Accessing the $input Variable" doc.
