Determine Deployment Group from appspec.yml - bash

I am using the elb scripts from https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb to remove my ec2 instances from the load balancer before I do my code updates.
I need to define the load balancer in the ELB_LIST variable of the common_functions.sh bash script. This load balancer will be different for each environment (or deployment group).
Is there a way I can set this variable based on which deployment group I am deploying to, from within this bash script?
The application artifacts will be the same, but deployed to different environments or groups and hence, different load balancers.

Well, after searching the AWS forums, I see they now support deployment-specific environment variables.
So I can reference the deployment group from within bash and set the load balancer:
if [ "$DEPLOYMENT_GROUP_NAME" == "Staging" ]
then
ELB_LIST="STAGING-ELB"
fi
Reference: http://blogs.aws.amazon.com/application-management/post/Tx1PX2XMPLYPULD/Using-CodeDeploy-Environment-Variables
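If more deployment groups are added later, the same idea extends naturally to a case statement. A minimal sketch, assuming group and load balancer names that match your own setup (the ones below are placeholders):

case "$DEPLOYMENT_GROUP_NAME" in
    Staging)
        ELB_LIST="STAGING-ELB"
        ;;
    Production)
        ELB_LIST="PROD-ELB"
        ;;
    *)
        # fail loudly if the group has no mapping, rather than touching the wrong ELB
        echo "No ELB mapping for deployment group: $DEPLOYMENT_GROUP_NAME" >&2
        ;;
esac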

Related

Shell script to automate the checklist of the AWS environment

I have created an environment in AWS. The environment has networking (VPC), EC2 instances, RDS (MySQL), Redis, ALB, S3, etc.
Now I want to have a shell script (bash) that will show the
EC2 instances - instance types, IPs, termination protection, etc.
Networking - VPC and subnet CIDRs, DNS hostnames (enabled or disabled), etc.
S3 - details like policy, bucket name, default encryption, replication rules, etc.
RDS - ARN, end points, reader and writer instances, version, etc.
Redis - version, node type, shards, total nodes, etc.
ALB - DNS name, listeners, etc.
and need to have all these in a file as output.
Note: I have to give only the AWS account number, region, and tags as input.
FYI, the above input values have to be read from a JSON or CSV file.
Can you please help me?
I tried some scripts, but they did not work properly.
Currently, I am manually updating and checking everything.
Note: I have an environment that was created through Terraform and contains networking, a bastion, the backend, a worker node, RDS, S3, and ALB. Now I want to validate all of these as part of a checklist through automation, in the form of a shell script that reports PASS or FAIL.
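For example, one of the EC2 checks could look something like this with the AWS CLI (the region, tag key, and expected values are just placeholders):

#!/bin/bash
# Hypothetical PASS/FAIL check: termination protection on tagged instances.
REGION="us-east-1"        # would come from the input JSON/CSV
TAG_KEY="Environment"     # would come from the input JSON/CSV
TAG_VALUE="dev"

for id in $(aws ec2 describe-instances --region "$REGION" \
        --filters "Name=tag:${TAG_KEY},Values=${TAG_VALUE}" \
        --query 'Reservations[].Instances[].InstanceId' --output text); do
    prot=$(aws ec2 describe-instance-attribute --region "$REGION" \
        --instance-id "$id" --attribute disableApiTermination \
        --query 'DisableApiTermination.Value' --output text)
    if [ "$prot" = "True" ]; then
        echo "PASS $id termination protection"
    else
        echo "FAIL $id termination protection"
    fi
done >> checklist-output.txt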
This is the kind of task IaC (Infrastructure as Code) tools such as Terraform were invented for.
You can write down the specifics of your cloud resources (such as S3, Lambda, etc.) and manage versions, configuration, and state backends based on your environment settings.
There are reference Terraform configurations for common AWS services that you can look at to get started with Terraform.
We use terraform.env.tfvars files to pass environment-specific variables, and automate the whole thing with some bash scripts. The reference repo is an actual project, so you can get an idea of how it's done.
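As a rough sketch of that per-environment pattern (the environment names and file layout are only examples):

#!/bin/bash
# Hypothetical wrapper: pick the tfvars file for the target environment.
set -euo pipefail
ENV="${1:?usage: $0 <dev|staging|prod>}"

terraform init
terraform plan -var-file="terraform.${ENV}.tfvars" -out="plan.${ENV}"
terraform apply "plan.${ENV}"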
Best wishes.

Continuous deployment & AWS autoscaling using Ansible (+Docker ?)

My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.
We're currently using Ansible for both:
system configuration (from a bare OS image)
frequent manually-triggered code deployments.
The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.
We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.
The first prerequisite was to move from a statically managed Ansible inventory to a dynamic, EC2 API-based one, done.
The next big question is how to deploy in this new world where throwaway instances come up & down in the middle of the night. The options I can think of are:
Bake a new fully-deployed AMI for each deploy, create a new AS launch config, and update the AS group with that. Sounds very, very cumbersome, but also very reliable because of the clean-slate approach, and it will ensure that any system changes the code requires are in place. Also, no additional steps are needed on instance boot-up, so instances are up & running more quickly.
Use a base AMI that doesn't change very often, automatically get the latest app code from git upon boot-up, and start the webserver (a rough sketch of such a boot script follows this list). Once it's up, just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (new package, permissions, etc.)? It looks like you have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full Ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice?
Use Docker? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image with Ansible that contains system packages + the latest code, rather than having Ansible just do it directly on an EC2 instance?
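The boot script mentioned in option 2 might look roughly like this as user-data (the repository URL, paths, and service name are placeholders, and a deploy key is assumed to already be baked into the base AMI):

#!/bin/bash
# Hypothetical user-data for option 2: base AMI pulls the latest code on boot.
set -euo pipefail
APP_DIR=/srv/app
REPO=git@example.com:myorg/myapp.git   # placeholder repository

if [ -d "$APP_DIR/.git" ]; then
    git -C "$APP_DIR" pull --ff-only
else
    git clone "$REPO" "$APP_DIR"
fi

pip install -r "$APP_DIR/requirements.txt"   # assumes Python deps are listed here
systemctl restart gunicorn                   # assumed service name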
How do you do it? Any insights / best practices?
Thanks!
This question is very opinion-based, but just to give you my take: I would go with pre-baking the AMIs with Ansible and then use CloudFormation to deploy your stacks with Auto Scaling, monitoring, and your pre-baked AMIs. The advantage of this is that if most of the application stack is pre-baked into the AMI, autoscaling up will happen faster.
Docker is another approach, but in my opinion it adds an extra layer to your application that you may not need if you are already using EC2. Docker can be really useful if, say, you want to containerize multiple applications on a single server. Maybe you have some spare capacity on a server, and Docker will allow you to run that extra application on the same server without interfering with existing ones.
Having said that, some people find Docker useful not so much for optimizing resources on a single server, but because it allows you to pre-bake your applications into container images. So when you deploy a new version or new code, all you have to do is copy/replicate these Docker containers across your servers, then stop the old container versions and start the new container versions.
My two cents.
A hybrid solution may give you the desired result. Store the head Docker image in S3, and pre-bake the AMI with a simple fetch-and-run script that executes on start (or pass it to a stock AMI via user-data). Handle version control by moving the head image to your latest stable version; you could probably also implement test stacks of new versions by making the fetch script smart enough to identify which Docker version to fetch based on instance tags, which are configurable at instance launch.
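A rough sketch of such a fetch-and-run script, assuming the image is stored as a tarball in S3 and the desired version is carried in an instance tag (all names here are placeholders):

#!/bin/bash
# Hypothetical boot script: fetch the tagged image version from S3 and run it.
set -euo pipefail
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
VERSION=$(aws ec2 describe-tags \
    --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=AppVersion" \
    --query 'Tags[0].Value' --output text)

aws s3 cp "s3://my-image-bucket/app-${VERSION}.tar" /tmp/app.tar
docker load -i /tmp/app.tar
docker run -d --restart=always -p 80:8000 "myapp:${VERSION}"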
You can also use AWS CodeDeploy with Auto Scaling and your build server. We use the CodeDeploy plugin for Jenkins.
This setup allows you to:
perform your build in Jenkins
upload to S3 bucket
deploy to all the EC2 instances in the assigned AWS Auto Scaling group, one by one.
All that with a push of a button!
Here is the AWS tutorial: Deploy an Application to an Auto Scaling Group Using AWS CodeDeploy
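A hand-rolled equivalent of what the plugin triggers, sketched with the CLI once the revision is in S3 (the application, group, bucket, and key names are placeholders):

aws deploy create-deployment \
    --application-name MyApp \
    --deployment-group-name MyAsgFleet \
    --s3-location bucket=my-build-bucket,key=builds/myapp-42.zip,bundleType=zip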

How to access beanstalk environment URL from EC2

My web application deployed via Elastic Beanstalk also reads from an SQS queue. As part of my blue/green deployment approach, I'd prefer only the environment actively serving production HTTP requests to pull messages from the queue. My original thought is to have the app periodically check the URL of the Elastic Beanstalk environment into which it is deployed and only read from SQS if the URL matches a certain pattern (indicating it is the current "production" environment).
How, from an app running on an Elastic Beanstalk deployed EC2 instance, can I determine its environment URL? (Or is there a better way to accomplish this goal?)
A better approach would be to look for an environment variable that you can control via the Elastic Beanstalk console. If the value of your environment variable is something like "production", your app should do production-y things.
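A minimal sketch of that check, assuming the property is exposed to your process as an environment variable and using a hypothetical APP_ROLE name:

if [ "$APP_ROLE" = "production" ]; then
    # only the environment currently serving production traffic polls SQS
    start_sqs_worker   # hypothetical helper in your app's startup script
fi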

amazon EC2 load balanced - how to deploy web app?

We're looking to move to the Amazon cloud using EC2 and RDS.
I'm looking at load balancing, which I would like to do with two servers, each in a different availability zone, to protect against downtime.
My question is how to deploy web applications and updates to them? I assume there is a better way than individually updating the files on each EC2 server?
In systems past, I have used the vcs puppet module to ensure that the appropriate source code is installed on my system, in addition to using puppet to build the configuration files for the apache/nginx server that I'm using. Another possibility is to push your application in a deployable state (if you're not using a scripting language) to Amazon S3, and have your run-time scripts pull the latest build from your S3 bucket.
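A minimal sketch of the S3 pull approach, run from user-data or a deploy hook on each instance (bucket, paths, and service name are placeholders):

#!/bin/bash
# Hypothetical: pull the latest build from S3 and reload the web server.
set -euo pipefail
aws s3 cp s3://my-deploy-bucket/releases/latest.tar.gz /tmp/latest.tar.gz
tar -xzf /tmp/latest.tar.gz -C /var/www/app
sudo systemctl reload nginx   # assumes nginx is serving the app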

How to sync my EC2 instance when autoscaling

When autoscaling my EC2 instances for an application, what is the best way to keep every instance in sync?
For example, there are custom settings and application files like the ones below...
Apache httpd.conf
php.ini
PHP source for my application
To get my autoscaling working, all of these must be configured the same on each EC2 instance, and I want to know the best practice for keeping these elements in sync.
You could use a private AMI which contains scripts that install software or check out the code from SVN, etc. The second possibility is to use a deployment framework like Chef or Puppet.
The way this works with Amazon EC2 is that you can pass user-data to each instance -- generally a script of some sort to run commands, e.g. for bootstrapping. As far as I can see, CreateLaunchConfiguration allows you to define that as well.
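A minimal sketch of wiring a bootstrap script into a launch configuration with the CLI (the names and AMI ID are placeholders):

aws autoscaling create-launch-configuration \
    --launch-configuration-name app-lc-v2 \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.small \
    --user-data file://bootstrap.sh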
If running this yourself is too much of an obstacle, I'd recommend a service like:
Scalarium
RightScale
Scalr (also open source)
They all offer some form of scaling.
HTH
