Heroku-like Amazon EC2

Is there anything that I can install on my EC2 instances that makes AWS Heroku-like?
e.g.:
heroku create app
git push
But for AWS.

Well, now there is! It's called AWS Elastic Beanstalk (still in beta, as of March 2013).
After running the initial setup, further deployments should be as simple as git aws.push
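For reference, the initial setup and subsequent pushes with the 2013-era Elastic Beanstalk command-line tools went roughly like this (a sketch; exact commands varied between tool versions):

eb init          # configure credentials, region, and solution stack
eb start         # create and launch the environment
git commit -am "my change"
git aws.push     # deploy the current branch to the environment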
EDIT: For a nice, broad overview of deployment possibilities at AWS, see the post by Werner Vogels (AWS CTO).

There are a few topics I need to touch on before I can answer your question thoroughly; so please, bear with me.
A bit of insight
With respect to your two examples, Heroku uses a number of different technologies to achieve the level of simplicity it provides as a service platform. One of these is Heroku's proprietary Toolbelt, a set of command-line tools that allows developers to interface with their applications and interact with many of the tools Heroku provides, such as terminal access for a number of different languages. The Toolbelt itself relies on two other technologies, Ruby and Git, which come prepackaged with the install.
In a nutshell
Now, when you create a Heroku app, you are effectively creating a Git repository on the Celadon Cedar runtime stack (by default); this repository is then added as a remote. This allows you to immediately run git push heroku master. There is a lot more happening behind the scenes: for instance, when you push, your commits are intercepted by a Git pre-receive hook, which runs your app through a slug compiler and prepackages it for distribution across the dyno manifold; but I digress. For more information on more advanced topics, check out https://devcenter.heroku.com/; there is a wealth of information there to read.
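In day-to-day terms, the whole cycle looks something like this (the app name is a placeholder):

heroku create myapp      # creates the app and adds the "heroku" Git remote
git push heroku master   # triggers the pre-receive hook and slug compiler described above
heroku ps                # inspect the dynos running the released slug
heroku logs --tail       # stream build and runtime output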
The stack
Now, let me explain the Cedar stack, as this is mainly what your question concerns. Celadon Cedar is one of many stacks; however, it is the current default (for many reasons). This polyglot runtime stack currently supports six web languages (at the time of writing), running on Ubuntu (11.04 stable, I believe). All of these technologies operate on top of the AWS EC2 computing environment.
So, to finally answer your question: you will need to install a suitable operating system such as Ubuntu; a set of languages such as Ruby, Python, Node.js, etc.; and Git (for deployment). The rest is up to you.
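As a rough sketch of that bootstrap on a fresh Ubuntu instance (the package names are illustrative; install whichever runtimes your app actually needs):

sudo apt-get update
sudo apt-get install -y git ruby nodejs python
# ...plus your app server, process manager, database, and so on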

If you have a fixed number of instances, it makes sense to use a custom Git deployment instead of Elastic Beanstalk, as described in this article: http://www.jeffhoefs.com/2012/09/setup-git-deploy-for-aws-ec2-ubuntu-instance/.
The main idea is to set up a Git repository on the EC2 instance. When you want to deploy something, you just push your changes to the remote repository hosted on that instance.
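A minimal sketch of that setup (the host and paths are placeholders; this is the standard bare-repository/post-receive pattern):

# On the EC2 instance: create a bare repository plus a deploy hook
git init --bare ~/app.git
cat > ~/app.git/hooks/post-receive <<'EOF'
#!/bin/sh
# check the pushed code out into the web root (the directory must already exist)
GIT_WORK_TREE=/var/www/app git checkout -f master
# restart your app server here (systemd, upstart, ...)
EOF
chmod +x ~/app.git/hooks/post-receive

# On your machine: add the instance as a remote and push to deploy
git remote add ec2 ubuntu@your-ec2-host:app.git
git push ec2 master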
I think this approach has the following benefits in comparison with Elastic Beanstalk:
You don't pay for S3 buckets for storing application versions;
You have full control on application deployment steps.

Related

How do I manage microservices with DevOps?

Say I have a front-end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each microservice get its own GitHub repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync with respect to which version they are supposed to rely on?
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each microservice get its own GitHub repo and CI/CD pipeline?
From my experience, you can do both. I have seen some teams put multiple microservices in one repository. We put each microservice in a separate repository, because our Jenkins pipeline was built in a generic way to build them like that. This included having some configuration files in specific directories, like "/Scripts/microserviceConf.json". This helped us in some cases. In general, you should also consider cost, as GitHub's pricing model takes into account how many private repositories you have.
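For illustration only, such a per-service configuration file might look like the following (the fields here are invented for the example; ours differed):

mkdir -p Scripts
cat > Scripts/microserviceConf.json <<'EOF'
{
  "serviceName": "blog",
  "dockerfile": "Dockerfile",
  "environments": ["integration", "test", "production"]
}
EOF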
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync with respect to which version they are supposed to rely on?
You need to be backwards compatible. That means if your blog's 2.4 version is not compatible with tools version 2.3, you will have high dependency and coupling, which goes against one of the key benefits of microservices. There are many ways to get around this.
You can introduce a versioning system for your microservices. If you have a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of that API. For example, POST "blogs/api/blog" would get a new counterpart, POST "blogs/api/v2/blog", with the new features, and the tools microservice would have some bridge period in which you support both APIs so it can migrate to v2.
Also take a look at semantic versioning.
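Purely as an illustration of that bridge period (the host and payloads are made up), both versions are served side by side:

curl -X POST https://blogs.example.com/blogs/api/blog -d '{"title": "..."}'      # v1, kept alive for existing consumers
curl -X POST https://blogs.example.com/blogs/api/v2/blog -d '{"title": "..."}'   # v2, which tools migrates to over time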
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of service discovery and microservice orchestration. Usually your cloud provider's specific services have tools to deal with this. You can take a look at AWS ECS and/or AWS EKS (the managed Kubernetes service) and how they handle it.
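As an illustration with Kubernetes (the names and image are assumptions): a Service gives tools one stable DNS name, no matter which pods, and hence IPs, currently back it:

kubectl create deployment tools --image=registry.example.com/tools:2.3
kubectl expose deployment tools --port=80 --target-port=8080
# other services now reach it at a stable address:
#   http://tools.default.svc.cluster.local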
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers representing your whole system. This would include your microservices, infrastructure (database, cache, helpers), and others. You can read more about it in this answer, in the section "Considering the Development Setup".
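A minimal sketch of such a setup (the service names are taken from the question above; the build paths and the Postgres image are assumptions):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  frontend:
    build: ./frontend
    ports: ["8080:8080"]
  tools:
    build: ./tools
  blog:
    build: ./blog
  store:
    build: ./store
  db:
    image: postgres:13
EOF
docker-compose up --build   # one command brings the whole system up locally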
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

Kubernetes CI/CD pipeline

My company has decided to transition to a microservice-based architecture.
We have been doing a bunch of research for the last couple of months on exactly what the architecture of this thing is going to look like.
So far, we've settled on:
.NET Core for service development (although being language agnostic is somewhat of an end goal)
Kafka for message brokering
Docker
Kubernetes
Ansible
We have a pretty basic proof of concept working, which seems to have ticked all the right boxes with the management team, and is an absolute joy to work with.
My next task is to investigate options for how the development workflow is actually going to work. They are already used to working in a CI/CD manner, with some of their newer products using Jenkins/Octopus Deploy.
My question is: Do any of you have any firm recommendations for setting up a CI/CD pipeline when deploying to a Kubernetes cluster?
A list of must haves is:
Multiple environments, i.e. Integration, Test, UAT, Staging, Production.
A means through which different business units can uniquely handle deployments to different environments (developers can only push to integration, testers to test, etc.). This one is probably their biggest ask - they are used to working with Octopus, and they love the way it handles this.
The ability to roll back / deploy at the click of a button (or with as few steps as possible).
We would be deploying to our own servers initially.
I've spent the past couple of days looking into options, of which there are many.
So far, Jenkins Pipeline seems like it could be a great start. Spinnaker also seems like a solid choice. I did read a bit about Fabric8, and while it offers much of what I'm asking for, it seems a bit like overkill.
If you want to use Jenkins, Pipelines are indeed the way to go. Our setup does pretty much what you want, so let me explain how we set it up.
We use a Jenkins agent that has Docker and kubectl installed. This agent first builds the Docker image and pushes it to our Docker registry. It then calls kubectl in various stages to deploy to our testing, acceptance and production clusters.
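Roughly, the stages on such an agent boil down to something like this (the image name, registry, and contexts are assumptions, not our actual setup):

# Build stage: bake and publish the image, tagged with the Jenkins build number
docker build -t registry.example.com/myservice:$BUILD_NUMBER .
docker push registry.example.com/myservice:$BUILD_NUMBER
# Deploy stages: point each cluster's deployment at the new image
kubectl --context=testing set image deployment/myservice myservice=registry.example.com/myservice:$BUILD_NUMBER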
Different business units: in a Pipeline you can use an input step to ask whether the Pipeline should proceed or not. You can specify who may press the button, so this is how you could solve the deployment to different clusters. (Ideally, when you get to CD, people will realize that pressing the button several times per day is silly and they'll just automate the entire deployment.)
Rollback: we rely on Kubernetes's rollback system for this.
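Kubernetes keeps a revision history per deployment, so rolling back is a one-liner (the deployment name is illustrative):

kubectl rollout history deployment/myservice   # list the stored revisions
kubectl rollout undo deployment/myservice      # revert to the previous revision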
Credentials: we provision the different Kubernetes credentials using Ansible directly to this Jenkins agent.
To reduce code duplication, we introduced a shared Jenkins Pipeline library, so each (micro)service talks to all Kubernetes clusters in a standardized way.
Note that we use plain Jenkins, Docker and Kubernetes. There is likely tons of software to further ease this process, so let's leave that open for other answers.
We're working on an open source project called Jenkins X, a proposed subproject of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins Pipelines and GitOps for promotion across environments.
If you want to see how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and preview environments on pull requests, you might want to check out my recent talk on Jenkins X at Devoxx UK, where I do a live demo of this on GKE.
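For a flavour of the workflow, the jx command-line tool drives the whole flow; a rough sketch (commands as of early Jenkins X, so details may have changed since):

jx import                                           # import an existing project; generates Dockerfile, Helm chart and Jenkinsfile
jx promote myapp --version 1.2.3 --env production   # GitOps promotion of a release to the production environment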

Heroku Applications within AWS VPC

I have a small Rails app which I'm keen to deploy through Heroku (as I do with other clients); however, this is not intended to be a publicly available application, and they need to deploy it within their AWS VPC so that it is only accessible from their internal network.
Is this something which is possible? I know that Heroku is built on top of EC2 but wasn't sure quite how flexible it was and haven't been able to find anything documented.
If this is not possible, would anyone be able to share experiences with pre-built Rails AMIs that I might be able to use in order to replicate some of the Heroku deployment simplicity, without having to worry too much about configuring and managing my own infrastructure for the app?
Sounds like git-deploy might be what you need for git-style deployment.
I also found an interesting blog post by Giles Bowkett that might be useful to you.

How do you run utility services on Heroku?

Heroku is fantastic for prototyping ideas and running simple web services. I often use it to run Python web services built with Flask and Django, and to try out ideas. However, I've always struggled to understand how you can use its infrastructure to run those amazingly powerful support or utility services every startup needs in its stack. Here are four examples of services I can't live without and would recommend to any startup:
Jenkins
Statsd
Graphite
Graylog
How would you run these on Heroku? Would it be best just to get dedicated boxes (Rackspace, etc.) with these support services installed?
Has anyone run utility daemons (services) on Heroku?
There are two basic options. The first is to find or create a Heroku addon to accomplish the task. For example, there are many hosted logging solutions you can use instead of Graylog; Rails on Fire or Travis can be used instead of Jenkins. If an appropriate addon doesn't exist, you can effectively make your own by just running the service on an AWS EC2 instance.
The other alternative is to push the service into being a twelve-factor application so that it can run on Heroku as well. For example, you could stub out the filesystem calls in Whisper (Graphite's storage library) so that they write to a backing service instead. This is often pretty painful and brittle, though, unless you can get your changes accepted by the upstream maintainers.
You could also use another free service in conjunction with it. OpenShift has a lot of Java-related build services and tools that can be added.
I am using a mix of Heroku, OpenShift, MongoLab and my own web hosting. Throw in Dropbox and Box for some space...

What exactly is Heroku?

I just started learning Ruby on Rails, and I was wondering: what is Heroku, really? I know that it's a cloud platform that helps us avoid managing servers. When do we actually use it?
Heroku is a cloud platform as a service. That means you do not have to worry about infrastructure; you just focus on your application.
In addition to what Jonny said, there are a few features of Heroku:
Instant deployment with a Git push - the build of your application is performed by Heroku using your build scripts
Plenty of add-on resources (applications, databases, etc.)
Process scaling - independent scaling for each component of your app without affecting functionality and performance (see the example below)
Isolation - each process (aka dyno) is completely isolated from the others
Full Logging and Visibility - easy access to all logging output from every component of your app and each process (dyno)
Heroku provides a very well-written tutorial which allows you to start in minutes. They also provide the first 750 computation hours per month free of charge, which means you can run one process (aka dyno) at no cost. Performance is very good too; e.g., a simple web application written in Node.js can handle around 60-70 requests per second.
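To make the scaling point above concrete, here is what it looks like with the heroku CLI (the process names depend on your Procfile):

heroku ps:scale web=2      # run the web process on two dynos
heroku ps:scale worker=1   # add a background worker dyno
heroku ps                  # list the running dynos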
Heroku competitors are:
OpenShift by Red Hat
Windows Azure
Amazon Web Services
Google App Engine
VMware
HP Cloud Services
Force.com
It's a cloud-based, scalable server solution that allows you to easily manage the deployment of your Rails (or other) applications provided you subscribe to a number of conventions (e.g. Postgres as the database, no writing to the filesystem).
Thus you can easily scale as your application grows, by upgrading your database and increasing the number of dynos (Rails instances) and workers.
It doesn't help you avoid using servers; you will need some understanding of server management to effectively debug problems with your platform/app combination. However, while it is comparatively expensive (i.e., per instance, when compared to renting a slice on Slicehost or something similar), there is a free account, and it is a rough trade-off whether it's more cost-effective to pay someone to build your own solution or to take on the extra expense.
Heroku basically provides you with web space to upload your app.
If you are uploading a Rails app, you can follow this tutorial:
https://github.com/mrkushjain/herokuapp
As I see it, it is a scalable, administered web hosting service, ready to grow in any sense so you don't have to worry about it.
It's not useful for a normal PHP web application, because there are plenty of web hosting services with FTP out there for a simple site without scalability needs; but if you need something bigger, Heroku or something similar is what you need.
It is exposed as a service via a command-line tool, so you can write scripts to automate your deployments. Otherwise, it is pretty similar to other web hosting services with Git enabled, but Heroku makes it simpler.
That's its thing: making the administration work simpler for you, so it saves you time. But I'm not sure yet, as I'm just starting with it!
A nice introduction of how it works in the official documentation is:
https://devcenter.heroku.com/articles/how-heroku-works
Per DZone: https://dzone.com/articles/heroku-or-amazon-web-services-which-is-best-for-your-startup
Heroku is a Platform as a Service (PaaS) product based on AWS, and is vastly different from Elastic Compute Cloud. It’s very important to differentiate ‘Infrastructure as a Service’ and ‘Platform as a Service’ solutions as we consider deploying and supporting our application using these two solutions.
Heroku is way simpler to use than AWS Elastic Compute Cloud. Perhaps it’s even too simple. But there’s a good reason for this simplicity. The Heroku platform equips us with a ready runtime environment and application servers. Plus, we benefit from seamless integration with various development instruments, a pre-installed operating system, and redundant servers.
Therefore, with Heroku, we don’t need to think about infrastructure management, unlike with AWS EC2. We only need to choose a subscription plan and change our plan when necessary.
That article does a good job explaining the differences between Heroku and AWS, but it looks like you can choose IaaS (infrastructure) providers other than AWS. So ultimately, Heroku seems to simplify the process of using a cloud provider, but at a cost.
