How to do automated functional testing of AWS components? - hadoop

In my project we have implemented a custom auto-scaling module. This module takes advantage of the AWS CloudWatch API and uses custom logic to scale the cluster up/down. All this code is written in Java plus shell scripts. We have written unit test cases using JUnit.
Now we want to automate the functional testing, but I do not know how other people do automated functional/integration testing of AWS components, or what the best practices for it are.
Consider the following scenario and expected result:
Scenario: HDFS utilization of an EC2-based Hadoop cluster goes above a given threshold.
Expected result: Attach a new EBS volume to one of the EC2 instances in the cluster.
I would like to know which technologies and languages can be used to do functional testing of scenarios like this.

Take a look at LocalStack. It provides an easy-to-use test/mocking framework for developing AWS-related applications by spinning up AWS-compatible APIs on your local machine or in Docker. It supports about two dozen AWS APIs, including EC2 and CloudWatch, which makes it a great tool for functional testing without needing a separate environment in AWS.
As for the language: you can use any language that has an AWS SDK. Even the AWS CLI works with LocalStack.
If you are a lucky user of Java/Kotlin and JUnit 5, I recommend aws-junit5, a set of JUnit 5 extensions for AWS (and, yes, I am its author). These extensions can be used to inject clients for AWS services provided by tools like LocalStack or any other AWS-compatible API (including the real AWS, of course). Both AWS Java SDK v2.x and v1.x are supported. You can use aws-junit5 to inject clients for S3, DynamoDB, Kinesis, SES, SNS, and SQS.
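For illustration, here is a minimal sketch of such a functional test using the AWS SDK v2 directly, assuming LocalStack is running on its default edge port 4566 and accepts dummy credentials; the "ClusterMetrics" namespace, the metric name, and the 90% threshold are hypothetical placeholders for your own module's values.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;

import java.net.URI;

import org.junit.jupiter.api.Test;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ListMetricsRequest;
import software.amazon.awssdk.services.cloudwatch.model.ListMetricsResponse;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

class AutoScalingFunctionalTest {

    // AWS SDK v2 client pointed at LocalStack's edge endpoint instead of AWS.
    private final CloudWatchClient cloudWatch = CloudWatchClient.builder()
            .endpointOverride(URI.create("http://localhost:4566"))
            .region(Region.US_EAST_1)
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create("test", "test")))
            .build();

    @Test
    void reactsToHighHdfsUtilization() {
        // Publish a fake "HDFS utilization above threshold" data point.
        cloudWatch.putMetricData(PutMetricDataRequest.builder()
                .namespace("ClusterMetrics") // hypothetical namespace
                .metricData(MetricDatum.builder()
                        .metricName("HDFSUtilization")
                        .value(92.0) // above an assumed 90% threshold
                        .unit(StandardUnit.PERCENT)
                        .build())
                .build());

        // Sanity check: the metric is visible through the mocked API.
        ListMetricsResponse metrics = cloudWatch.listMetrics(
                ListMetricsRequest.builder().namespace("ClusterMetrics").build());
        assertFalse(metrics.metrics().isEmpty());

        // From here, run your auto-scaling module against the same endpoint
        // and assert that it decides to attach a new EBS volume.
    }
}
```

The key design point is that only the endpoint differs between the test and production: the same client code talks to LocalStack locally and to the real AWS when you change the endpoint, so your module's logic is exercised unmodified.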

Related

Local development, events and Ruby on Jets

I'm trying out Ruby on Jets, since we'd like to reuse Rails code but we have been deploying on AWS lately.
We'd like to have a smooth offline development experience such as the one provided by Serverless.
Has anyone managed to test events such as S3 or SNS in RubyOnJets without deploying the lambda?
I'm not familiar with Ruby on Jets, but it seems to be using AWS. It might be possible to use LocalStack (https://github.com/localstack/localstack) to simulate AWS services on a local machine.

Why do we need Spotinst functions when we already have Azure Functions, AWS Lambda, and Google Cloud Functions?

We already have Azure Functions in Microsoft Azure, AWS Lambda in AWS, and Google Cloud Functions in Google Cloud.
So what are the reasons to use Spotinst functions?
Will a Spotinst function be replicated and run on all cloud providers (Azure, AWS, Google) and in all regions at the same time when we choose all cloud providers and regions?
Which cloud providers will we have to pay for running a Spotinst function?
I'm not an expert on Spotinst, but I had a chance to chat with them at ServerlessConf NYC. It's my understanding that their value proposition is to save you money on cloud infrastructure, i.e. things like VMs.
You specifically mentioned Azure Functions, AWS Lambda, and Google Cloud Functions. Those are serverless/FaaS services, which means that you as the developer don't need to think about infrastructure or VMs at all, and the consumption-based prices are already dirt cheap. Serverless tech has its limitations, however, which means it is not appropriate for all use cases (for example, if you need to execute long-running code, or your code depends on special software installed on the VM instance).
In that light, Spotinst makes more sense for non-Serverless applications which need to run on cloud VMs.

SAM CLI for CI/CD other than CloudFormation

Is it possible to use the SAM CLI (or any other tool known to mankind) to deploy a Lambda function with defined triggers, memory and timeout limits, etc., the way the SAM CLI can do it using CloudFormation (or even in a better way)?
Currently I'm using TravisCI to deploy my Lambda functions, but that's really just a better zip uploader to AWS, as I can't define any triggers for the Lambda function the way I can through SAM (Serverless Application Model).
I would look into leveraging AWS CodePipeline, CodeBuild, and CodeDeploy for your serverless functions' CI/CD. SAM also has some awesome baked-in tools for leveraging CodeDeploy under the hood to enable things like weighted rollouts, canary deploys, etc.
https://github.com/aws-samples/aws-safe-lambda-deployments
https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/
For specifying things like memory, triggers, and timeouts, this would all be done in the CloudFormation template as you mentioned, and this is best practice.
Since asking the question I came across different useful tools to deploy configured Lambda functions:
serverless framework: all-in-one development & monitoring of auto-scaling apps on AWS Lambda
AWS CDK: define cloud infrastructure using familiar programming languages (see the sketch below)
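Since the CDK lets you define triggers, memory, and timeouts in an ordinary programming language and synthesizes the CloudFormation for you, here is a minimal sketch in Java with CDK v2, assuming aws-cdk-lib is on the classpath; the construct IDs, handler class, jar path, and SQS queue trigger are all hypothetical placeholders.

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Duration;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.Runtime;
import software.amazon.awscdk.services.lambda.eventsources.SqsEventSource;
import software.amazon.awscdk.services.sqs.Queue;

public class LambdaStack extends Stack {
    public LambdaStack(final App scope, final String id) {
        super(scope, id);

        // Lambda with memory and timeout limits defined in code rather
        // than in a handwritten CloudFormation template.
        Function fn = Function.Builder.create(this, "MyFunction")
                .runtime(Runtime.JAVA_11)
                .handler("com.example.Handler::handleRequest") // hypothetical
                .code(Code.fromAsset("build/libs/function.jar")) // hypothetical
                .memorySize(512)
                .timeout(Duration.seconds(30))
                .build();

        // A trigger: invoke the function for each message on an SQS queue.
        Queue queue = new Queue(this, "InputQueue");
        fn.addEventSource(new SqsEventSource(queue));
    }

    public static void main(final String[] args) {
        App app = new App();
        new LambdaStack(app, "LambdaStack");
        app.synth(); // emits the CloudFormation template
    }
}
```

Running `cdk deploy` against this app then provisions the function, its configuration, and the trigger in one step, which addresses the "better zip uploader" problem from the question.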

Do I need Amazon ECS or EC2 instances when I am planning to deploy IBM Bluemix fabric images on Docker?

So the scenario is to create a blockchain network around a use case that takes input every 2 minutes. Now I need to do this on Bluemix Hyperledger v0.6 using Docker. Should I deploy it on ECS or on EC2 instances, given that a blockchain is meant for multiple peers and each peer should reside on its own instance?
Bluemix has excellent documentation which provides step-by-step instructions on getting started with blockchain here: https://www.ibm.com/developerworks/cloud/library/cl-ibm-blockchain-101-quick-start-guide-for-developers-bluemix-trs/index.html?ca=drs-
The above doc covers setting up your network, writing the chaincode, writing client apps, monitoring your code, etc. You can follow that article to get an insight into how to proceed further. Hope this helps.

SaltStack and PaaS

Is Salt suited for PaaS?
Let's say I'd like to provision a PaaS compute service, such as Amazon Beanstalk, Azure Cloud Service (web role / worker role), or even a Heroku Dyno, as part of a SaltStack state (perhaps alongside a VM or a database). Each of these services has an API, and some have an SDK, meaning that it should technically be possible for the master to provision the PaaS using a (Python) script.
Of course, SaltStack is primarily written for IaaS. However, is the above use case common/possible for SaltStack?
Short answer: If it has an API, Salt can talk to it.
Long answer:
There are currently no built-in execution modules or states for provisioning Amazon Beanstalk, Azure Cloud Service*, or Heroku. That said, there's no reason there could not be. See, for example, the suite of boto_* execution modules and states (search for "boto_*" on http://docs.saltstack.com/en/latest/). Such state modules could be used in your state SLS files, and execution modules could be called from a custom runner.
*I'm not personally familiar with the Azure platform or salt-cloud, but salt-cloud does support Azure.
Every PaaS service usually has API support in multiple languages. Using Python, for example, you can create custom execution modules that do the needful and call them from Salt states as required.
