AWS Lambda Deployment via CodePipeline

I would like to deploy my Lambda functions using AWS CodePipeline. However, when I follow the CodePipeline creation wizard, I can't figure out which option to choose at the Beta stage, because both AWS CodeDeploy and Elastic Beanstalk only target EC2 instances. There is a lack of step-by-step tutorials on creating a pipeline for Lambda and API Gateway deployments. How can I skip the Beta stage without choosing one of them, or which one should I choose for my serverless architecture's deployments?

There are no direct integrations for Lambda/API Gateway in CodePipeline at the moment. You could certainly do something with Jenkins, as @arjabbar suggested. Thanks for the feedback; we'll take this on our backlog.

CloudFormation is now available in CodePipeline. This allows you to target CloudFormation templates as actions in the pipeline.
Here's an overview (the implementation was moved to a private repository after I changed positions):
https://aws.amazon.com/blogs/compute/continuous-deployment-for-serverless-applications/
In this pipeline we deploy a staging lambda, test its functionality, then deploy the production lambda.
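The deploy stage of such a pipeline boils down to packaging the template and deploying the resulting stack. A minimal sketch of those two steps with the AWS CLI, assuming a hypothetical artifact bucket (`my-artifact-bucket`) and stack name (`my-lambda-staging`):

```shell
# Upload local code referenced by the template to S3 and rewrite the
# template to point at the uploaded artifacts.
aws cloudformation package \
    --template-file template.yml \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.yml

# Create or update the stack (Lambda functions, API Gateway, IAM roles, ...).
aws cloudformation deploy \
    --template-file packaged.yml \
    --stack-name my-lambda-staging \
    --capabilities CAPABILITY_IAM
```

In the pipeline from the blog post, the same deploy step is run twice, once against a staging stack and once against production, with a test action in between.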

Related

How to execute a script on an EC2 instance from bitbucket-pipelines.yml?

I have a Bitbucket repository, a Bitbucket pipeline, and an EC2 instance. The EC2 instance has access to the repository (it can perform pull and docker build/run).
So it seems I only need to upload some bash scripts to EC2 and call them from the Bitbucket pipeline. How can I call them? Usually an SSH connection is used to run scripts on EC2; is that applicable from a Bitbucket pipeline? Is it a good solution?
There are two ways to solve this problem; I will leave the choice up to you.
I see you are using AWS, and AWS has a nice service called CodeDeploy. You can use it, create a few deployment scripts, and then integrate it with your pipeline. The drawback is that it requires an agent to be installed on the instance, so it consumes some resources; not much, but if you are looking for an agentless design this solution won't work. You can check the example in the following answer: https://stackoverflow.com/a/68933031/8248700
Alternatively, you can use something like Python Fabric (a small gun) or Ansible (a big cannon) to achieve this. Both are agentless designs that work purely over SSH.
I'm using both approaches for different scenarios: for AWS I use CodeDeploy, and for any other cloud vendor I use Python Fabric. (CodeDeploy can also be used outside AWS, but then it falls under on-premises pricing, which charges per deployment.)
I hope this brings some clarity.
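For the agentless route, the Bitbucket pipeline step is essentially a single SSH call. A minimal sketch, assuming a hypothetical script `deploy.sh` already on the instance and an SSH key configured in the repository's pipeline settings:

```shell
# Run the deploy script on the EC2 host over SSH from a pipeline step.
# Host name and script path are placeholders for illustration.
ssh -o StrictHostKeyChecking=no \
    ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com \
    'bash /home/ec2-user/deploy.sh'
```

Bitbucket Pipelines lets you register the private key once in the repository settings, so the step itself stays this short.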

What are AWS Lambda artifacts and why do I need to create an S3 bucket to store them?

As per the official AWS Lambda tutorial repository for Node.js, step 1 consists of creating an S3 bucket for Lambda artifacts.
However, I cannot find any definition of what exactly artifacts are, nor why I am supposed to store them in an S3 bucket.
Apparently, the Lambda tutorial for Node.js makes use of AWS CloudFormation.
The CloudFormation .yml template file is a byproduct of the development process and is therefore considered an artifact. This template file is what needs to be stored in the S3 bucket for later rebuilds, etc.
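In practice, the packaged function code is an artifact as well: Lambda can only pull deployment packages from S3 (it does not read your local disk), which is why the tutorial has you create the bucket first. A minimal sketch with a hypothetical bucket name:

```shell
# Zip the function code - this zip is the build artifact.
zip -r function.zip index.js node_modules/

# Upload the artifact to the bucket so CloudFormation/Lambda can reference it.
aws s3 cp function.zip s3://my-lambda-artifacts/function.zip
```

The CloudFormation template then points at the S3 key instead of a local path, which is what makes later rebuilds and rollbacks possible.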

How to expose ARN of resource created by the Serverless Framework to Terraform

We are planning to use the Serverless Framework to create AWS Lambda functions, and Terraform to provision other infrastructure in AWS. We use SSM parameters to give Serverless access to resources created by Terraform.
However, I am wondering: is there any way to access resources created by Serverless from Terraform? The use case is as follows: in Terraform we need to grant explicit bucket permissions to a Lambda created by Serverless. At the moment we have to hard-code the ARN of the Lambda. Is there any way to avoid that?
Serverless uses CloudFormation under the hood, and there is no direct way to share data between CloudFormation and Terraform. The most common approach is to use SSM to share data in both directions, so I think you are on the right track (you can use a data source to fetch the value from SSM).
There is an overview of how to use the Serverless Framework with Terraform in this blog post.
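Concretely, the Serverless side publishes the ARN to SSM after deploy, and Terraform reads it back with the `aws_ssm_parameter` data source. A sketch with hypothetical parameter and function names:

```shell
# After "serverless deploy": look up the function's ARN and write it to SSM.
aws ssm put-parameter \
    --name /myapp/lambda/processor-arn \
    --type String --overwrite \
    --value "$(aws lambda get-function \
                 --function-name myapp-dev-processor \
                 --query Configuration.FunctionArn --output text)"

# Terraform then reads it without hard-coding the ARN:
#   data "aws_ssm_parameter" "processor_arn" {
#     name = "/myapp/lambda/processor-arn"
#   }
#   ...and references data.aws_ssm_parameter.processor_arn.value
#   in the bucket policy.
```

The same pattern works in the other direction, which is presumably how you already pass Terraform-created resources into Serverless.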

Set up GitHub Actions to build and deploy Java code on EC2

I have been trying to use GitHub Actions to set up CI/CD for an AWS EC2 machine. I tried the official AWS actions (https://github.com/aws-actions), but they only cover deploying to ECS or using CodeBuild.
Is it possible to do this with GitHub Actions alone? The steps I am trying to perform are:
1) Build with Maven
2) Deploy to EC2
I am new to this, so any pointers would be helpful.
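Yes, this is possible with GitHub Actions alone: the workflow just needs to run a build step and then copy the artifact over SSH. A minimal sketch of the commands a workflow job could run, assuming a hypothetical host, install path, and systemd service; the SSH private key would come from an encrypted repository secret rather than being checked in:

```shell
# Build the jar (non-interactive batch mode for CI).
mvn -B package

# Copy the artifact to the instance and restart the service.
# Host name, paths, and service name are placeholders.
scp target/app.jar ec2-user@my-ec2-host:/opt/app/app.jar
ssh ec2-user@my-ec2-host 'sudo systemctl restart app'
```

In the workflow file these become `run:` steps, with the key written from the secret to `~/.ssh` in an earlier step; no AWS-specific action is required.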

AWS Cloud9 and Lambda Alias

I am trying to understand how AWS Cloud9 works with the AWS Lambda alias and version system.
When deploying a Lambda from Cloud9, does it always deploy to $LATEST?
When importing a Lambda into Cloud9, does it always import $LATEST?
Can we choose versions?
Can we choose aliases?
If this is somewhere in the docs, sorry, I just can't find it.
The Lambda section of the AWS Resources window in the AWS Cloud9 IDE currently does not provide any features for working with Lambda function versions or aliases. Instead, you can use the terminal in the IDE to run the AWS CLI and AWS SAM CLI with the corresponding commands, actions, and arguments. For details, see the following:
Introduction to AWS Lambda Versioning in the AWS Lambda Developer Guide
Introduction to AWS Lambda Aliases in the AWS Lambda Developer Guide
Managing Versioning Using the AWS Management Console, the AWS CLI, or Lambda API Operations in the AWS Lambda Developer Guide
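From that terminal, versioning and aliasing come down to a few AWS CLI calls. A sketch with a hypothetical function name; Cloud9's own deploy always updates $LATEST, and these commands pin releases after the fact:

```shell
# Snapshot the current $LATEST as an immutable numbered version.
aws lambda publish-version \
    --function-name my-function \
    --description "first release"

# Point a "prod" alias at that version (assume it was published as 1).
aws lambda create-alias \
    --function-name my-function \
    --name prod --function-version 1

# Later releases move the alias instead of changing callers.
aws lambda update-alias \
    --function-name my-function \
    --name prod --function-version 2
```

Callers then invoke the `prod` alias ARN, so redeploying from Cloud9 never affects them until the alias is moved.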
