SaltStack and PaaS - Heroku

Is Salt suited for PaaS?
Let's say I'd like to provision a PaaS compute service, such as Amazon Beanstalk, an Azure Cloud Service (web role / worker role), or even a Heroku dyno, as part of a SaltStack state (perhaps alongside a VM or a database). Each of these services provides an API, and some an SDK, meaning that it should technically be possible for the master to provision the PaaS using a (Python) script.
Of course, SaltStack is primarily written for IaaS. However, is the above use case common/possible for SaltStack?

Short answer: If it has an API, Salt can talk to it.
Long answer:
There are currently no built-in execution modules or states for provisioning Amazon Beanstalk, Azure Cloud Service*, or Heroku. That said, there's no reason there could not be. See, for example, the suite of boto_* execution modules and states (search for "boto_*" on http://docs.saltstack.com/en/latest/). Such state modules could be used in your state SLSs (see the sketch below), and execution modules could be called from a custom runner.
*I'm not personally familiar with the Azure platform or salt-cloud, but salt-cloud does support Azure.
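For illustration, here is roughly what using one of the existing boto_* states in an SLS looks like (a minimal sketch based on the boto_sqs state; the queue name and credentials are placeholders):

```yaml
# Ensure an SQS queue exists; credentials are passed inline here (placeholders).
myqueue:
  boto_sqs.present:
    - region: us-east-1
    - keyid: <aws-access-key-id>
    - key: <aws-secret-access-key>
```

A hypothetical heroku or beanstalk state module would slot into an SLS in the same way.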

Every PaaS service usually has API support in multiple languages. Using Python, for example, you can create custom modules that do the work and call those modules from Salt states as required.
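As a concrete illustration, a custom execution module wrapping the Heroku Platform API might look roughly like this (a minimal sketch; heroku.py and create_app are hypothetical names, not a shipped Salt module, and the API key handling is simplified):

```python
# _modules/heroku.py - hypothetical custom Salt execution module
import requests  # assumes requests is available where the module runs

API_URL = "https://api.heroku.com"

def create_app(name, api_key):
    """Create a Heroku app via the Platform API.

    Usage:  salt '*' heroku.create_app myapp <api_key>
    """
    resp = requests.post(
        API_URL + "/apps",
        json={"name": name},
        headers={
            "Accept": "application/vnd.heroku+json; version=3",
            "Authorization": "Bearer " + api_key,
        },
    )
    resp.raise_for_status()  # surface API errors to the caller
    return resp.json()
```

A matching state module would then wrap such calls in a present/absent interface, just as the boto_* states do.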

Related

Azure Resource Manager: The Future of Cloud Services

I am currently working heavily in Azure. I am actually quite fond of ARM (Azure Resource Manager) right now and would love to keep using it. Right now in the old portal, we have a lot of resources tied up as Cloud Services. Now, I know Cloud Services are available in the new portal, but it seems that Microsoft is moving away from the classic Cloud Service model. Can someone explain if this is true? If so, what will the new model look like? I already use resource groups to manage Websites (Web Apps), so I assume this is where the Azure future lies. Will we see the "deprecation" of Cloud Services down the line?
I am trying to understand if I need to begin re-structuring my Azure Infrastructure.
Any insight, explanation, or documentation is greatly appreciated.
So there are two things here - Cloud Services and management of Cloud Services.
When you manage Cloud Services in the current portal, the underlying mechanism is Azure Service Management (ASM), whereas in the preview portal it is Azure Resource Manager (ARM). To me, ARM is the new way of managing your cloud resources in Azure (including Cloud Services).
I don't work for Microsoft, so I would not know whether Cloud Services themselves will be deprecated down the road, but one thing I think will happen is that ASM will be deprecated in favor of ARM. At some point, the only option you will be left with for managing your cloud resources will be Azure Resource Manager. One thing that makes me believe this is the presence of Classic resource providers (e.g. the Classic Storage resource provider, which lets you manage storage accounts created in the current portal via ASM from the preview portal, which works exclusively on ARM).
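To make the ASM-vs-ARM distinction concrete, this is roughly what managing resources the ARM way looks like from code (a minimal sketch using today's azure-identity and azure-mgmt-resource Python packages, which postdate this answer; the subscription ID and names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# In ARM, everything lives in a resource group.
client.resource_groups.create_or_update("demo-rg", {"location": "westeurope"})

# Classic (ASM) resources only show up here through the Classic
# resource providers mentioned above.
for res in client.resources.list_by_resource_group("demo-rg"):
    print(res.name, res.type)
```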
Personally I can't see a place for cloud services in the new ARM world of Azure. I have always found them a convoluted concept that simply added complexity to a deployment.
In the ARM view of deployments servers are collected together in a VNet, and each server is attached to a Nic which in turn can be connected to the internet. A security group then takes care of ingress / egress rules.
This is a much cleaner deployment method, as it puts connectivity configuration at the server layer instead of mapping them all through a higher layer of abstraction.
I don't see a place for Cloud Services in ARM; however, after a quick search it seems that there is a plan to implement it.
Still no direction from the Azure Advisers group, other than that officially they will not drop support for Cloud Services. I think they are nearing giving us some kind of direction, but I can't say any more than that.
I asked a question about the future of Cloud Services on the recent Azure Compute AMA.
You can read the answers directly on Reddit for all details, below are a few interesting quotes (emphasis mine).
On ARM Integration for Cloud Services:
We are looking at ways to make the transition to ARM easier for Cloud Service customers - one of those options includes CS integration in ARM. This investigation is in the very early stages though, so if you are looking for a solution soon, check out VMSS/ACS/SF/Web Apps (meagan-msft)
And:
I think it's safe to say that if we make any significant investment in CS in the near future, it would be ARM integration, and as Meagan suggests, that's still in planning. Beyond that, there are no major feature improvements on the horizon. We believe the platform is pretty mature at this point. (seanmichaelmckenna)
So it doesn't look like any major innovations will hit Cloud Services soon, however:
Cloud Services are not going anywhere. In fact, many Microsoft services run on Cloud Services, so we heavily rely on them as well. They are fully supported, so feel free to continue to use them.
(meagan-msft)
For those who want to switch to a different Compute service, these recommendations were made:
However, if you would like to check out other services that are integrated with ARM today, we recommend checking out the following:
Web Apps for customers who want a fully managed platform and are building traditional web applications
Service Fabric for customers who want an opinionated application platform and managed infrastructure, but still need some control over the IAAS layer
VM Scale Sets for customers who need IaaS-level control with easy scaling, autoscale and load balancer integration
Azure Container Service was also listed as a potential alternative.
Some things to consider (my understanding):
Service Fabric currently (2017) requires at least 5 VM instances, except for dev/test purposes, so it is probably only an option for larger services.
VM Scale Sets are an IaaS offering, i.e. you have to manage OS updates etc. yourself. However, support for automatic OS updates is being worked on.

How to do automated functional testing of AWS components?

In my project we have implemented a custom auto-scaling module. This module takes advantage of the AWS CloudWatch API and uses its own logic to scale the cluster up/down. All this code is written in Java plus shell scripts. We have written unit test cases using JUnit.
Now we want to automate the functional testing, but I do not know how other people do automated functional/integration testing of AWS components, or what the best practices for AWS component functional testing are.
Consider the following scenario and expected output:
Scenario: HDFS utilization of an EC2-based Hadoop cluster goes above a given threshold.
Expected result: attach a new EBS volume to one of the EC2 instances in the cluster.
I would like to know which technologies and languages can be used to do functional testing of this scenario.
Take a look at LocalStack. It provides an easy-to-use test/mocking framework for developing AWS-related applications by spinning up AWS-compatible APIs on your local machine or in Docker. It supports about two dozen AWS APIs, EC2 and CloudWatch among them. It is really a great tool for functional testing without using a separate environment in AWS.
As for the language: you can use any language that has an AWS SDK. Even the AWS CLI works with LocalStack.
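For example, a functional test for the scenario above could point boto3 at LocalStack and drive the scaling logic end to end (a minimal sketch; the endpoint assumes LocalStack's default edge port 4566, and the metric name and namespace are placeholders):

```python
import boto3

ENDPOINT = "http://localhost:4566"  # LocalStack's default edge port
COMMON = dict(endpoint_url=ENDPOINT, region_name="us-east-1",
              aws_access_key_id="test",      # LocalStack accepts
              aws_secret_access_key="test")  # dummy credentials

cloudwatch = boto3.client("cloudwatch", **COMMON)
ec2 = boto3.client("ec2", **COMMON)

# Publish a fake HDFS-utilization data point above the threshold...
cloudwatch.put_metric_data(
    Namespace="Hadoop/HDFS",
    MetricData=[{"MetricName": "HDFSUtilization",
                 "Value": 92.0, "Unit": "Percent"}],
)

# ...then run the auto-scaling module against the same endpoint and
# assert on the (mocked) EC2 state, e.g. that a new EBS volume exists.
volumes = ec2.describe_volumes()["Volumes"]
assert any(v["State"] == "available" for v in volumes)
```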
If you are a lucky user of Java/Kotlin and JUnit 5, I would like to recommend aws-junit5, a set of JUnit 5 extensions for AWS (and, yes, I am its author). These extensions can be used to inject clients for AWS services provided by tools like LocalStack or any other AWS-compatible API (including the real AWS, of course). Both AWS Java SDK v2.x and v1.x are supported. You can use aws-junit5 to inject clients for S3, DynamoDB, Kinesis, SES, SNS and SQS.

How to install/configure new software on newly created Amazon instances using the Amazon SDK in Java?

My team is developing an application which will enable end users to easily create, configure and destroy Amazon instances without having to use the Amazon SDKs themselves. The process at our end comprises three steps.
1. Create / destroy VMs in the Amazon cloud using the Amazon SDK (done)
2. Configure/Install new software in the newly created instance.
3. Track usage/command and control.
We are currently on the second step. I just realized that the Amazon SDK does not provide APIs for installing new software on the remote machine. I am not talking about the AmazonCloudFormation APIs, because those are used to create and manage AWS resources rather than software like, say, a browser.
Has anyone installed new software on an Amazon instance? If yes, did you use a) the Amazon SDK, b) any third-party APIs, or c) a custom solution?
Also, is it even possible to install new software on an Amazon instance through Java code?
The Amazon API primarily controls infrastructure. It does not have any control over what happens inside the instance.
There are a couple of ways you can bootstrap your instance and install software. You can use user-data to pass a script that will run on first launch (see the sketch below). You could use a provisioning system like Chef or Puppet. You could roll your own if that works better for you.
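For instance, with the user-data approach the install commands travel with the launch request itself (a minimal boto3 sketch, shown in Python rather than Java for brevity; the AMI ID and package are placeholders, and Amazon Linux executes user-data via cloud-init on first boot):

```python
import boto3

# Shell script that runs once, as root, on first boot.
user_data = """#!/bin/bash
yum install -y htop    # install whatever software the instance needs
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this for you
)
```

The Java SDK exposes the same field via RunInstancesRequest#withUserData, though there you must base64-encode the script yourself.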
What you are describing sounds a lot like a Platform-as-a-Service (PaaS).
A PaaS would allow you to submit an application to the PaaS and let it start the machines and set up your software on them. A PaaS would also give you additional features like monitoring, cross-cloud support and updating the application on the fly.
There are several PaaS vendors mentioned here: Looking for PaaS Recommendations
Disclaimer: I work for Cloudify, an open-source PaaS.

What exactly is Heroku?

I just started learning Ruby on Rails and I was wondering what Heroku really is. I know that it's a cloud platform that helps us avoid managing servers ourselves. When do we actually use it?
Heroku is a cloud platform as a service. That means you do not have to worry about infrastructure; you just focus on your application.
In addition to what Jonny said, there are a few features of Heroku:
Instant deployment with git push - the build of your application is performed by Heroku using your build scripts
Plenty of add-on resources (applications, databases etc.)
Process scaling - independent scaling for each component of your app without affecting functionality and performance
Isolation - each process (aka dyno) is completely isolated from the others
Full logging and visibility - easy access to all logging output from every component of your app and each process (dyno)
Heroku provides a very well-written tutorial which allows you to start in minutes. They also provide the first 750 computation hours free of charge, which means you can run one process (aka dyno) at no cost. Performance is very good too; e.g. a simple web application written in Node.js can handle around 60-70 requests per second.
Heroku competitors are:
OpenShift by Red Hat
Windows Azure
Amazon Web Services
Google App Engine
VMware
HP Cloud Services
Force.com
It's a cloud-based, scalable server solution that allows you to easily manage the deployment of your Rails (or other) applications provided you subscribe to a number of conventions (e.g. Postgres as the database, no writing to the filesystem).
Thus you can easily scale as your application grows by upgrading your database and increasing the number of dynos (Rails instances) and workers.
It doesn't help you avoid servers entirely; you will need some understanding of server management to effectively debug problems with your platform/app combination. It is comparatively expensive (i.e. per instance when compared to renting a slice on Slicehost or something), but there is a free account, and it's a rough trade-off between whether it's more cost-effective to pay someone to build your own solution or to take on the extra expense.
Heroku basically provides web space to which you upload your app.
If you are uploading a Rails app, you can follow this tutorial:
https://github.com/mrkushjain/herokuapp
As I see it, it is a scalable, administered web hosting service, ready to grow in any sense, so you don't have to worry about it.
It's not useful for a normal PHP web application, because there are plenty of web hosting services with FTP out there for a simple web site without scalability needs, but if you need something bigger, Heroku or something similar is what you need.
It is exposed as a service via a command-line tool, so you can write scripts to automate your deployments (see the sketch below). Anyway, it is pretty similar to other web hosting services with Git enabled, but Heroku makes it simpler.
That's its thing: making the administration stuff simpler for you, so it saves you time. But I'm not sure, as I'm just starting with it!
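For example, the deployment steps can be scripted around git and the heroku CLI (a minimal sketch; the app name is a placeholder, and older apps push the master branch instead of main):

```python
import subprocess

APP = "my-rails-app"  # placeholder app name

def sh(*args):
    """Run a command, failing loudly if it fails."""
    subprocess.run(args, check=True)

sh("heroku", "create", APP)                      # create the app + git remote
sh("git", "push", "heroku", "main")              # deploying is just a git push
sh("heroku", "ps:scale", "web=2", "--app", APP)  # scale out a second dyno
```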
A nice introduction to how it works is in the official documentation:
https://devcenter.heroku.com/articles/how-heroku-works
Per DZone: https://dzone.com/articles/heroku-or-amazon-web-services-which-is-best-for-your-startup
Heroku is a Platform as a Service (PaaS) product based on AWS, and is vastly different from Elastic Compute Cloud. It’s very important to differentiate ‘Infrastructure as a Service’ and ‘Platform as a Service’ solutions as we consider deploying and supporting our application using these two solutions.
Heroku is way simpler to use than AWS Elastic Compute Cloud. Perhaps it’s even too simple. But there’s a good reason for this simplicity. The Heroku platform equips us with a ready runtime environment and application servers. Plus, we benefit from seamless integration with various development instruments, a pre-installed operating system, and redundant servers.
Therefore, with Heroku, we don’t need to think about infrastructure management, unlike with AWS EC2. We only need to choose a subscription plan and change our plan when necessary.
That article does a good job explaining the differences between Heroku and AWS, but it looks like you can choose IaaS (infrastructure) providers other than AWS. So ultimately Heroku seems to simplify the process of using a cloud provider, but at a cost.

Library/development platform on EC2/Rackspace/Eucalyptus/OpenStack

I am trying to build a cloud VM brokering service which can borrow compute power as VMs on demand from private/public cloud infrastructure. I have the following goals for my service.
Abstract the vendor-specific APIs into a library, which will give me the flexibility to choose any vendor's VMs (e.g. EC2, Rackspace) without affecting the service built on top of the library.
I should also have the flexibility to borrow VMs from a purely private cloud built using stacks like OpenStack/Eucalyptus. Due to the huge upfront capex we will be using public clouds at first, but we plan to move to private cloud infrastructure, so from a design perspective we want to keep those details transparent to the brokering service.
My question is whether there are any open-source/commercial libraries or cloud development platforms which give me this functionality, over which I can just build my service without bothering about vendor-specific details.
I came across RightScale and Scalr, but I am not clear whether they are tools or platforms. I need a platform on which I can develop, not just tools to monitor and auto-provision cloud deployments.
TIA.
For Python there are boto and libcloud.
For Java there's jclouds and also a port of libcloud (scroll a bit further down the page).
These are all open source libraries.
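As an illustration of the abstraction, provisioning through Apache Libcloud looks the same regardless of the provider behind the driver (a minimal sketch; credentials, region, and the choice of size/image are placeholders):

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Swap Provider.EC2 for Provider.OPENSTACK, Provider.RACKSPACE, etc.
Driver = get_driver(Provider.EC2)
conn = Driver("ACCESS_KEY", "SECRET_KEY", region="us-east-1")

sizes = conn.list_sizes()
images = conn.list_images()

# The same create_node() call works across providers, which is the
# point of the vendor-agnostic layer.
node = conn.create_node(name="broker-vm-1", size=sizes[0], image=images[0])
```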
Yes, there is! It's a Ruby library called fog. It's the only library I have found which gives you a vendor-agnostic interface to various cloud providers.
For an OpenStack cloud (Rackspace, and maybe some others in the future) you should consider using the following Python libraries:
novaclient - client library for the OpenStack Compute API
nova-adminclient - client for administering OpenStack Nova
You will be able to write recipes to provision, control, and play with your VMs in an OpenStack cloud.
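For instance, listing your servers with novaclient looks roughly like this (a minimal sketch using the legacy novaclient constructor; the credentials and auth URL are placeholders):

```python
from novaclient import client

# "2" is the Compute API version; the remaining arguments are placeholders.
nova = client.Client("2", "username", "password", "project_name",
                     "http://keystone.example.com:5000/v2.0")

for server in nova.servers.list():
    print(server.id, server.name, server.status)
```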
Hope it helps. Let me know if you need any more help in this regard.
