Is there a way to save & test an AWS Lambda function with a single click? Ideally, I'd like to be able to test unsaved changes but I don't see an option for this. I'm just finding it tedious to save each time I want to test out changes.
If you are creating your Lambda function via the AWS Lambda console, then you will need to Save the function before running Test. This is because the function runs on a Lambda container, not in the console.
Alternatively, you can run Lambda Local to test functions on your own computer rather than on the Lambda service. Once the code works, you can upload it to AWS.
See: Run AWS Lambda Functions Locally on a Windows Machine - DZone Cloud
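If you want a faster edit-test loop without redeploying at all, the AWS SAM CLI can also invoke a function locally in a Docker container. A minimal sketch, assuming the SAM CLI and Docker are installed and your template.yaml defines a function named HelloWorldFunction (the name is illustrative):
# Invoke the function once with a sample event file
sam local invoke HelloWorldFunction --event event.json
# Or start a local Lambda endpoint you can hit repeatedly while editing code
sam local start-lambda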
How about using the Endly automation runner with its aws/lambda service?
In this case, you would define your deployment workflow and just run it with:
endly deploy
where deploy.yaml defines the automation workflow:
init:
  functionRole: lambda-helloworld-executor
  functionName: HelloWorld
  codeZip: /tmp/hello/main.zip
  privilegePolicy: privilege-policy.json
pipeline:
  deploy:
    action: aws/lambda:deploy
    credentials: aws
    functionname: $functionName
    runtime: go1.x
    handler: helloworld
    code:
      zipfile: $LoadBinary(${codeZip})
    rolename: lambda-helloworld-executor
    define:
      - policyname: my-bucket-role
        policydocument: $Cat('${privilegePolicy}')
    attach:
      - policyarn: arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Finally, you might also be interested in end-to-end testing automation here.
My Lambda function's code is over 4096 characters, so I can't deploy the Lambda function as inline code in a CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and file details in CloudFormation, and deploy it.
I can't find a way to deploy with one command. If I update the Lambda code, I have to repeat the above steps.
But both AWS SAM and the Serverless Framework can deploy Lambda functions without inline code.
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
Any solution or recommendations for me?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to set up an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
--template-file input.yaml \
--s3-bucket my-packaging-bucket \
--s3-prefix my-project/ \
--output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
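A minimal sketch of such a script, reusing the bucket, prefix, and file names from above (the stack name is a placeholder):
#!/usr/bin/env bash
set -euo pipefail
# Zip local code references and upload them to S3, producing output.yaml
aws cloudformation package \
    --template-file input.yaml \
    --s3-bucket my-packaging-bucket \
    --s3-prefix my-project/ \
    --output-template-file output.yaml
# Create or update the stack from the packaged template
aws cloudformation deploy \
    --template-file output.yaml \
    --stack-name my-project \
    --capabilities CAPABILITY_IAM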
You can zip the file first, then use the AWS CLI to update your Lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
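To confirm the new code is live, one option is to invoke the function straight from the CLI afterwards; this writes the function's response to a local file:
# Invoke the freshly updated function and print its response
aws lambda invoke --function-name <your-lambda-function-name> response.json
cat response.json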
Within CloudFormation (last 3 lines):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
The only issue is, aws SAM or serverless framework create API gateway as default, that I don't need it to be created
For the Serverless Framework that's not true by default. The default generated serverless.yml file includes config for the Lambda function itself, but the configuration for API Gateway is provided only as an example in the commented-out section that follows.
If you uncomment the 'events' section for http then it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get
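So with a freshly generated service, a plain deploy creates only the Lambda function and its supporting resources. A rough sketch of that workflow, assuming the Serverless Framework CLI is installed as serverless:
# Generate a new service with the default (commented-out) events section
serverless create --template aws-nodejs --path my-service
cd my-service
# Deploy without touching the events section: no API Gateway is created
serverless deploy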
I followed the tutorial below to create a Lambda deployment pipeline using CDK. When I keep everything in the same account it works well.
https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
But my scenario is slightly different from the example because it involves two AWS accounts instead of one. I maintain the application source code and the pipeline in the OPS account, and this pipeline deploys the Lambda application to the UAT account.
OPS Account (12345678) - CodeCommit repo & CodePipeline
UAT Account (87654321) - Lambda application
As per the following AWS documentation (Cross-account actions section), I made the changes below to the source code.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html
The Lambda stack exposes the deploy action role as follows:
export class LambdaStack extends cdk.Stack {
  public readonly deployActionRole: iam.Role;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    ...
    this.deployActionRole = new iam.Role(this, 'ActionRole', {
      assumedBy: new iam.AccountPrincipal('12345678'), // pipeline account
      // the role has to have a physical name set
      roleName: 'DeployActionRole',
    });
  }
}
In the pipeline stack,
new codePipeline.Pipeline(this, 'MicroServicePipeline', {
  pipelineName: 'MicroServicePipeline',
  stages: [
    {
      stageName: 'Deploy',
      actions: [
        new codePipelineAction.CloudFormationCreateUpdateStackAction({
          role: props.deployActionRole,
          ....
        })
      ]
    }
  ]
});
The following is how I instantiate the stacks:
const app = new cdk.App();
const opsEnv: cdk.Environment = {account: '12345678', region: 'ap-southeast-2'};
const uatEnv: cdk.Environment = {account: '87654321', region: 'ap-southeast-2'};
const lambdaStack = new LambdaStack(app, 'LambdaStack', {env: uatEnv});
const lambdaCode = lambdaStack.lambdaCode;
const deployActionRole = lambdaStack.deployActionRole;
new MicroServicePipelineStack(app, 'MicroServicePipelineStack', {
  env: opsEnv,
  stackName: 'MicroServicePipelineStack',
  lambdaCode,
  deployActionRole
});
app.synth();
My AWS credentials profile looks like:
[profile uatadmin]
role_arn=arn:aws:iam::87654321:role/PigletUatAdminRole
source_profile=opsadmin
region=ap-southeast-2
When I run cdk diff or deploy, I get an error saying:
➜ infra git:(master) ✗ cdk diff MicroServicePipelineStack --profile uatadmin
Including dependency stacks: LambdaStack
Stack LambdaStack
Need to perform AWS calls for account 87654321, but no credentials have been configured.
What have I done wrong here? Is it my CDK code or is it the way I have configured my AWS profile?
Thanks,
Kasun
The problem is with your AWS CLI configuration. You cannot use the CDK CLI natively to deploy resources in two separate accounts with one CLI command. There is a recent blog post on how to tell CDK which credentials to use, depending on the stack environment parameter:
https://aws.amazon.com/blogs/devops/cdk-credential-plugin/
The way we use it is to deploy stacks into separate accounts with multiple CLI commands, specifying the required profile. All parameters that need to be exchanged (such as the location of your lambdaCode) are passed via, for example, environment variables.
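With the profiles from the question, that could look roughly like this (it assumes an opsadmin profile with credentials for the OPS account, as implied by the source_profile setting below):
# Deploy the Lambda stack into the UAT account (87654321)
cdk deploy LambdaStack --profile uatadmin
# Deploy the pipeline stack into the OPS account (12345678)
cdk deploy MicroServicePipelineStack --profile opsadmin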
Just try using the environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
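For example (the values are placeholders):
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=****
export AWS_DEFAULT_REGION=ap-southeast-2
cdk diff MicroServicePipelineStack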
Or
~/.aws/credentials
[default]
aws_access_key_id=****
aws_secret_access_key=****
~/.aws/config
[default]
region=us-west-2
output=json
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
It works for me.
I'm using cdk version 1.57.0
The issue is in the fact that you have resources that exist in multiple accounts and hence there are different credentials required to create those resources. However, CDK does not understand natively how to get credentials for those different accounts or when to swap between the different credentials. One way to fix this is to use cdk-assume-role-credential-plugin, which will allow you to use a single CDK deploy command to deploy to many different accounts.
I wrote a detailed tutorial here: https://johntipper.org/aws-cdk-cross-account-deployments-with-cdk-pipelines-and-cdk-assume-role-credential-plugin/
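Roughly, you install the plugin and pass it to the CDK CLI via its --plugin option; treat this as a sketch and check the plugin's README for the exact installation source and the roles it expects to assume in each target account:
# Install the credential plugin (it can also be installed directly from its GitHub repository)
npm install -g cdk-assume-role-credential-plugin
# Load the plugin so the CDK CLI can obtain credentials per target account
cdk deploy MicroServicePipelineStack --plugin cdk-assume-role-credential-plugin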
I'm trying to use cx_Oracle to connect to an RDS (Oracle) database from inside an AWS Lambda function (python3.7). The Lambda function itself is automatically built by AWS CodeBuild using a buildspec.yml file. CodeBuild runs via AWS CodePipeline, which is configured so that whenever the repository holding my code (in this case AWS CodeCommit) is updated, everything is automatically rebuilt.
Things that I have done:
1. I have an AWS Lambda function with code as follows.
import cx_Oracle

def lambda_handler(event, context):
    dsn = cx_Oracle.makedsn('www.host.com', '1521', 'dbname')
    connection = cx_Oracle.connect(user='user', password='password', dsn=dsn)
    cursor = connection.cursor()
    cursor.execute('select * from table_name')
    return cursor
Inside the buildspec.yml I have the following build commands.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install cx_Oracle -t ./  # install the cx_Oracle package in the same directory as the script
      - unzip instantclient-basic-linux*.zip -d /opt/oracle  # I have downloaded the zip file beforehand
      - <other code>
I have also configured the template.yml of the Lambda function as follows
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Making a test lambda function using codepipeline
Resources:
  funcAuthorityReceive:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: testFunction
      Environment:
        Variables:
          PATH: '/opt/oracle/instantclient_19_5:$PATH'
          LD_LIBRARY_PATH: '$LD_LIBRARY_PATH:/opt/oracle/instantclient_19_5'
      Handler: lambda_function.lambda_handler
      MemorySize: 128
      Role: 'arn:aws:iam::XXXXXXXXXXXXXX:role/role-for-lambda'
      Runtime: python3.7
      CodeUri: ./
Here, everything runs smoothly and the Lambda function itself gets built, but when I run the lambda this error shows up:
"DPI-1047: Cannot locate a 64-bit Oracle Client library: \"libclntsh.so: cannot open shared object file: No such file or directory\". See https://oracle.github.io/odpi/doc/installation.html#linux for help"
Any help would be greatly appreciated.
When you want to use cx_Oracle to reach your Oracle database, make sure that when you zip the Lambda package (code and other dependencies) you preserve the symlinks:
zip --symlinks -r lambda.zip .
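The reason is that the Instant Client typically ships libclntsh.so as a symlink to the versioned library (e.g. libclntsh.so.19.1); if the zip step resolves symlinks instead of preserving them, the loader can fail to find the library at runtime. A quick sanity check of the package (paths are illustrative):
# libclntsh.so should show up as a symlink to the versioned library
ls -l opt/oracle/instantclient_19_5/libclntsh.so
# After zipping with --symlinks, entries flagged with an "l" are stored as links
zipinfo lambda.zip | grep libclntsh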
I haven't worked with CodeBuild, but I have built the Lambda package on a Linux server; soon I will be creating a build pipeline in Azure DevOps.
I'm trying to create a CloudFormation template supporting a Lambda function and an AWS CodeBuild project for building .NET Core source code into a deployed zip file in an S3 bucket.
Here are the particulars:
Using a GitHub mono-repo with multiple Lambda functions as different projects in the .NET Core solution
Each Lambda function (i.e. each .NET Core project) has a CloudFormation YAML file generating a stack containing the Lambda function itself and a CodeBuild project.
The CodeBuild project is initiated from a GitHub webhook, which retrieves the code from the GitHub sub-project and uses its buildspec.yml to govern how the build should happen.
The buildspec uses .NET Core to build the project, then zips and copies the output to a target S3 bucket.
The Lambda function points to the S3 bucket for its source code.
This is all working just fine. What I'm struggling with is how to update Lambda function to use updated compiled source code in S3 bucket.
Here is subset of CloudFormation template:
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: roicalculator-eventpublisher
      Handler: RoiCalculator.Serverless.EventPublisher::RoiCalculator.Serverless.EventPublisher.Function::FunctionHandler
      Code:
        S3Bucket: deployment-artifacts
        S3Key: RoiCalculatorEventPublisher.zip
      Runtime: dotnetcore2.1

  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: RoiCalculator-EventPublisher-Master
      Artifacts:
        Location: deployment-artifacts
        Name: RoiCalculatorEventPublisher.zip
        Type: S3
      Source:
        Type: GITHUB
        Location: https://github.com/XXXXXXX
        BuildSpec: RoiCalculator.Serverless.EventPublisher/buildspec.yml
Here is subset of buildspec.yaml:
phases:
  install:
    runtime-versions:
      dotnet: 2.2
    commands:
      - dotnet tool install -g Amazon.Lambda.Tools
  build:
    commands:
      - dotnet restore
      - cd RoiCalculator.Serverless.EventPublisher
      - dotnet lambda package --configuration release --framework netcoreapp2.1 -o .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip
      - aws s3 cp .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip s3://deployment-artifacts/RoiCalculatorEventPublisher.zip
You can see the same artifact name (RoiCalculatorEventPublisher.zip) and S3 bucket (deployment-artifacts) are being used in the buildspec (for generating and copying) and in the CloudFormation template (for the Lambda function's source).
Since I'm overwriting the application code in the S3 bucket using the same file name the Lambda is using, why is the Lambda not being updated with the latest code?
How do version numbers work? Is it possible to have a 'system variable' containing the name of the artifact (file name + version number) and access the same 'system variable' in the buildspec AND the CloudFormation template?
What's the secret sauce for using a CloudFormation template to build the source code (via the buildspec) with CodeBuild and also update the Lambda function that consumes the generated code?
Thank you.
Unfortunately, unless you change the "S3Key" on the 'AWS::Lambda::Function' resource on every update, CloudFormation will not see it as a change (it does not look inside the zipped code for changes).
Options:
Option 1) Update the S3 key with every upload (see the sketch after this list).
Option 2) The recommended approach is to use AWS SAM to author the Lambda template, then use the "cloudformation package" command to package the template, which takes care of creating a unique key for S3 and uploading the file to the bucket. Details here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
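For Option 1, a rough sketch of what changing the key on every upload can look like; it assumes your template exposes the key as a parameter (LambdaS3Key here is hypothetical):
#!/usr/bin/env bash
set -euo pipefail
# Derive a unique key from the zip's content so every real change produces a new S3Key
HASH=$(md5sum RoiCalculatorEventPublisher.zip | cut -d' ' -f1)
KEY="RoiCalculatorEventPublisher-${HASH}.zip"
aws s3 cp RoiCalculatorEventPublisher.zip "s3://deployment-artifacts/${KEY}"
# Feed the new key to the stack; LambdaS3Key is a hypothetical template parameter
aws cloudformation deploy \
    --template-file template.yaml \
    --stack-name roicalculator-eventpublisher \
    --parameter-overrides LambdaS3Key="${KEY}" \
    --capabilities CAPABILITY_IAM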
Edit 1:
In response to your comment, let me add some details of the SAM approach:
To use CloudFormation as a deployment tool for your Lambda function in your pipeline, the basic idea is as follows:
1) Create a SAM template for your Lambda function
2) A basic SAM template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  FunctionName:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./code
3) Add a directory "code" and keep the lambda code files in this directory
4) Install the SAM CLI [1]
5) Run the command to package and upload:
$ sam package --template-file template.yaml --output-template packaged.yaml --s3-bucket {your_S3_bucket}
6) Deploy the package:
$ aws cloudformation deploy --template-file packaged.yaml --stack-name stk1 --capabilities CAPABILITY_IAM
You can keep the template code (Steps 1-2) in CodeCommit/GitHub and do Steps 4-5 in a CodeBuild step. For Step 6, I recommend doing it via a CloudFormation action in CodePipeline that is fed the "packaged.yaml" file as an input artifact.
See also [2].
References:
[1] Installing the AWS SAM CLI on Linux - https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html
[2] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/en_us/lambda/latest/dg/build-pipeline.html
I am using aws s3 sync instead of aws s3 cp and have never had this problem.
I am working on a project with a serverless architecture and multiple Lambdas, wherein we have multiple folders, each with just a Python file and a requirements.txt file inside it.
Usually the directory and the Lambda are named the same for convenience; for example, the folder email_sender would have a Python file email_sender.py and a requirements.txt if it needs one.
In CodeBuild, after installing the dependencies, this is how we are zipping:
echo "--- Compiling lambda zip: ${d}.zip"
d=$(tr "_" "-" <<< "${d}")
zip -q -r ${d}.zip . --exclude ".gitignore" --exclude "requirements.txt" --exclude "*__pycache__/*" > /dev/null 2>&1
mv ${d}.zip ../../${CODEBUILD_SOURCE_VERSION}/${d}.zip
And when copying to the S3 bucket, we use aws s3 sync as follows:
aws s3 sync ${CODEBUILD_SOURCE_VERSION}/ ${S3_URI} --exclude "*" --include "*.zip" --sse aws:kms --sse-kms-key-id ${KMS_KEY_ALIAS} --content-type "binary/octet-stream" --exact-timestamps
This question is in relation to a CloudFormation template which tries to create Lambda functions. The template is in CodeCommit and uses CodePipeline to create the Lambda. But I am struggling to specify the "Code" property. The actual code for the Lambda function is in my CodeCommit repo. Below is the example from the AWS documentation, but it appears to take the code from an S3 bucket. Do I specify the file name? If so, in what format? Thank you.
AMIIDLookup:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "index.handler"
    Role:
      Fn::GetAtt:
        - "LambdaExecutionRole"
        - "Arn"
    Code:
      S3Bucket: "lambda-functions"
      S3Key: "amilookup.zip"
    Runtime: "nodejs8.10"
    Timeout: 25
    TracingConfig:
      Mode: "Active"
Further info - here is my CloudFormation template, which is pushed to the CodeCommit repo. The template and the pipeline work perfectly with inline code, but I do not know how to specify that the code should be taken from a file in the CodeCommit repo, e.g. if the code is in a file ./abc/index.js.
Resources:
  LFVQS1:
    Type: 'AWS::Lambda::Function'
    Properties:
      Handler: 'index.function_name'
      Role: 'arn:aws:iam::561731601292:role/service-role/mailfwd-role-m5rl5tu3'
      Runtime: "nodejs8.10"
      Code:
        ZipFile: "exports.wrtiteToConsole = function (event, context, callback){ console.log('Hello'); callback(null); }"
If you're asking in the context of CodePipeline (based on the tags), you can either use the ParameterOverrides configuration property of the CloudFormation action to reference the CodePipeline artifact (stored in S3) or use the S3 publish action and reference the location in your CloudFormation template.
CloudFormation action reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-action-reference.html