SAM CLI for CI/CD other than CloudFormation - aws-lambda

Is it possible to use the SAM CLI (or any other tool known to mankind) to deploy a Lambda function with defined triggers, memory and timeout limits set, etc. the way the SAM CLI can do it using CloudFormation (or even in a better way)?
Currently I'm using TravisCI to deploy my Lambda functions, but that's really just a better zip uploader to AWS, as I can't define any triggers for the Lambda function the way I can through SAM (Serverless Application Model).

I would look into leveraging AWS CodePipeline, CodeBuild and CodeDeploy for your serverless functions' CI/CD. SAM also has some awesome baked-in tools for leveraging CodeDeploy under the hood to enable things like weighted rollouts, canary deploys, etc.
https://github.com/aws-samples/aws-safe-lambda-deployments
https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/
Specifying things like memory, triggers and timeouts would all be done in the CloudFormation template, as you mentioned, and this is best practice.
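For reference, the same settings can also be set directly against the Lambda API from a CI script (e.g. from Travis) if you don't want a template at all. A minimal boto3 sketch, where the function name, queue ARN and values are placeholders, not anything from this thread:

import boto3

lambda_client = boto3.client("lambda")

# Memory and timeout limits (placeholder values)
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=256,
    Timeout=30,
)

# Trigger: poll an SQS queue and invoke the function with batches of messages
lambda_client.create_event_source_mapping(
    FunctionName="my-function",
    EventSourceArn="arn:aws:sqs:eu-west-1:123456789012:my-queue",
    BatchSize=10,
)

The template approach remains the recommended one, since it keeps the whole configuration declarative and versioned.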

Since asking the question I came across different useful tools to deploy configured Lambda functions:
Serverless Framework: all-in-one development & monitoring of auto-scaling apps on AWS Lambda
AWS CDK: define cloud infrastructure using familiar programming languages
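To illustrate the CDK option against the original question, here is a minimal AWS CDK (Python, v2) sketch that defines a Lambda function with a memory limit, a timeout and an SQS trigger; the construct names, handler and asset path are placeholders:

from aws_cdk import Stack, Duration, aws_lambda as _lambda, aws_sqs as sqs
from aws_cdk.aws_lambda_event_sources import SqsEventSource
from constructs import Construct

class MyServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        queue = sqs.Queue(self, "WorkQueue")

        fn = _lambda.Function(
            self, "WorkerFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda"),
            memory_size=256,               # memory limit
            timeout=Duration.seconds(30),  # timeout limit
        )

        # Trigger: invoke the function for messages arriving on the queue
        fn.add_event_source(SqsEventSource(queue))

Under the hood, cdk deploy still synthesizes and deploys a CloudFormation stack, but the configuration lives in ordinary Python code.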

Related

Difference between CloudFormation stack and Serverless application (published to a repository)

With private applications in mind, I'm struggling to understand the difference between CloudFormation stacks and a Serverless Application published to a repository.
I have a SAM template with a couple of functions. I then build, package and deploy it with SAM CLI.
At this point I have a CF stack and I can call my functions using the boto3 Python lib.
import json
import boto3

lambda_client = boto3.client("lambda")
# test_event is the JSON-serializable test payload defined elsewhere
lambda_client.invoke(
    FunctionName="MyFunctionName",
    Payload=bytes(json.dumps(test_event), encoding='utf8'),
    Qualifier="live"
)
What is the purpose of the publish command (which publishes to the Serverless Application Repository)?
If I published my application as private, how would I call my functions via that application? It seems to me that executing the functions would still be done the same way as without publishing it.
Publishing means your function can be found and used by other parties.
You can use the AWS SAM CLI to publish your application to the AWS Serverless Application Repository to make it available for others to find and deploy.
Example: Share applications across your teams and organizations to reduce duplicated development efforts and promote consistency and best practices.
If you use your function only within your account and you don't plan to share it, it is not required to publish it.
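For completeness, a rough sketch of what publishing amounts to via the API (the application name, author and file name below are placeholders); it only registers the packaged template with the repository and does not change how the deployed functions are invoked:

import boto3

sar = boto3.client("serverlessrepo")
sar.create_application(
    Name="my-private-app",
    Author="my-team",
    Description="Internal example application",
    TemplateBody=open("packaged.yaml").read(),  # output of sam package
    SemanticVersion="1.0.0",
)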

React with net core backend. Can I use only amplify for CI/CD

I am setting up a React app with a fully serverless backend in AWS. Looking into AWS Amplify, it looks great and simplifies tons of things. I see it can also support your backend deployment, however, to my knowledge so far, only if it's Node.js/JavaScript.
As I have a requirement for using .net, I am confused if this can be achieved via amplify.
I.e my final goal is:
frontend: react
backend: .NET Lambdas connected to various queues and events
(and this is the main confusion point) all being managed by the Amplify CLI, creating and keeping in sync the CloudFormation stack, connected to GitHub, and serving as a CI/CD pipeline.
An alternative, as I see it, would be to start 'backwards' from Visual Studio, defining all the backend Lambdas needed and then adding a static frontend.
But amplify CLI is so great, I was wondering if there is some way.
I will appreciate any help. Thank you!
After a week of R&D, and after finding out about CDK, I chose the following strategy:
stage 1
1) Use Amplify to provision needed Auth / AppSync / Dynamo services
2) Use Visual Studio with the AWS SDK plugin to provision needed Lambda / SQS / SNS / S3
stage 2
Create a CDK project that will programmatically provision all infrastructure (in an OOP manner), including CI/CD
Hope this helped someone :)
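As a rough illustration of stage 2, here is what the CI/CD part could look like with CDK Pipelines in Python (CDK v2); the GitHub repo, branch and build commands are placeholders, not from this post:

from aws_cdk import Stack
from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep
from constructs import Construct

class CiCdStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Self-mutating pipeline: pulls from GitHub, synthesizes the CDK app
        # and deploys the resulting CloudFormation stacks on every push
        CodePipeline(
            self, "Pipeline",
            synth=ShellStep(
                "Synth",
                input=CodePipelineSource.git_hub("my-org/my-repo", "main"),
                commands=[
                    "npm install -g aws-cdk",
                    "pip install -r requirements.txt",
                    "cdk synth",
                ],
            ),
        )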

Why do we need Spotinst Functions when we already have Azure Functions, AWS Lambda and Google Cloud Functions

We already have Azure Functions in Microsoft Azure, AWS Lambda in AWS, and Google Cloud Functions in Google Cloud.
So what are the reasons to use Spotinst Functions?
Will a Spotinst function be replicated and running on all cloud providers such as Azure, AWS and Google, and in all regions at the same time, when we choose all cloud providers and regions?
Which cloud providers will we have to pay for running a Spotinst function?
I'm not an expert on Spotinst, but I had a chance to chat with them at ServerlessConf NYC. It's my understanding that their value-prop is to save you money on cloud infrastructure. That's things like VMs.
You specifically mentioned Azure Functions, AWS Lambda, and Google Cloud Functions. Those are serverless/FaaS services, which means that you as the developer don't need to think about infrastructure or VMs at all, and the consumption-based prices are already dirt-cheap. Serverless tech has its limitations, however, which means it's not appropriate for all use cases (for example, if you need to execute long-running code, or install special software on the VM instance that your code depends on).
In that light, Spotinst makes more sense for non-Serverless applications which need to run on cloud VMs.

SaltStack and PaaS

Is Salt suited for PaaS?
Let's say I'd like to provision a PaaS compute service, such as Amazon Beanstalk, Azure Cloud Service (web role / worker role), or even a Heroku Dyno, as part of a SaltStack state (perhaps alongside a VM or a database). Each of these services has an API and some have an SDK, meaning that it should technically be possible for the master to provision the PaaS using a (Python) script.
Of course, SaltStack is primarily written for IaaS. However, is the above use case common/possible for SaltStack?
Short answer: If it has an API, Salt can talk to it.
Long answer:
There are currently no built-in execution modules or states for provisioning Amazon Beanstalk, Azure Cloud Service*, or Heroku. That said, there's no reason there could not be. See, for example, the suite of boto_* execution modules and states (search for "boto_*" on http://docs.saltstack.com/en/latest/). Such state modules could be used in your state SLSs, and execution modules could be called from a custom runner.
*I'm not personally familiar with the Azure platform or salt-cloud, but salt-cloud does support Azure.
Every PaaS service usually has API support in multiple languages. Using Python, for example, you can create custom modules to do what's needed and call those modules from Salt states as required.
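As a minimal sketch of that idea, a custom execution module is just a Python file dropped into your fileserver's _modules directory; the module name, function and Elastic Beanstalk call below are illustrative assumptions:

# _modules/beanstalk.py - hypothetical custom Salt execution module
import boto3

def create_app(name, region="us-east-1"):
    """
    Create an Elastic Beanstalk application via the AWS API.
    CLI example: salt '*' beanstalk.create_app my-app
    """
    client = boto3.client("elasticbeanstalk", region_name=region)
    return client.create_application(ApplicationName=name)

After a saltutil.sync_modules, the function can be called from the command line or wrapped by a state via module.run.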

How to do automated functional testing of AWS components?

In my project we have implemented a custom auto-scaling module. This module takes advantage of the AWS CloudWatch API and uses its own custom logic to scale the cluster up/down. All this code is written in Java + shell scripts. We have written unit test cases using JUnit.
Now we want to automate the functional testing, but I do not know how other people do automated functional/integration testing of AWS components, or what the best practices for AWS component functional testing are.
Consider the following scenario and expected output:
Scenario : HDFS utilization of EC2 based Hadoop cluster goes above given threshold.
Expected result : Attach new EBS volume to one of the EC2 instance in the cluster.
I would like to know which technology or language can be used to do functional testing of these scenarios.
Take a look at LocalStack. It provides an easy-to-use test/mocking framework for developing AWS-related applications by spinning up AWS-compatible APIs on your local machine or in Docker. It supports a couple dozen AWS APIs, and EC2 and CloudWatch are among them. It is really a great tool for functional testing without needing a separate environment in AWS.
As for the language: you can use any language that has an AWS SDK. Even the AWS CLI works with LocalStack.
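A minimal pytest-style sketch against LocalStack, where the endpoint URL, metric names and the way the auto-scaling module is invoked are assumptions, not part of the original setup:

import boto3

LOCALSTACK = "http://localhost:4566"  # LocalStack edge endpoint
COMMON = dict(endpoint_url=LOCALSTACK, region_name="us-east-1",
              aws_access_key_id="test", aws_secret_access_key="test")

def test_ebs_volume_added_when_hdfs_utilization_high():
    cloudwatch = boto3.client("cloudwatch", **COMMON)
    ec2 = boto3.client("ec2", **COMMON)

    # Simulate the trigger condition: push a high HDFS-utilization datapoint
    cloudwatch.put_metric_data(
        Namespace="Hadoop/HDFS",
        MetricData=[{"MetricName": "HdfsUtilization", "Value": 92.0, "Unit": "Percent"}],
    )

    # ... run the custom Java/shell auto-scaling module against the same endpoints ...

    # Assert the expected result: a new EBS volume has been created
    volumes = ec2.describe_volumes()["Volumes"]
    assert len(volumes) > 0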
If you are a lucky user of Java/Kotlin and JUnit 5, I would like to recommend aws-junit5, a set of JUnit 5 extensions for AWS. And yes, I am its author. These extensions can be used to inject clients for AWS services provided by tools like LocalStack or any other AWS-compatible API (including the real AWS, of course). Both AWS Java SDK v2.x and v1.x are supported. You can use aws-junit5 to inject clients for S3, DynamoDB, Kinesis, SES, SNS and SQS.
