Deploy application on AWS VPC - ruby

I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions described in the SQS messages, and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic?
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano. But in a VPC there is the concept of a private IP address, and you cannot access a machine inside a subnet. So my question is:
How should I deploy code on the EC2 instances, or rather, how should I connect to them?
Thank You.

This question is pretty broad, but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic
It's beneficial if you care about the security of your data in transit and at rest. In a VPC none of your traffic is exposed to the outside, and you can choose which components you want to expose in case you want to receive traffic/data from the outside, e.g. your ELB or ELBs.
I create my EC2 machines using ruby scripts, and deploy code on them using capistrano. In classic mode I used the IP address to deploy code using capistrano. But in VPC there is a concept of private IP address and you cannot access a machine inside a subnet. So my question is: How should I deploy code on the EC2 instances or rather how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to. You can use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
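For example, if you create the machines with the AWS SDK for Ruby, you can request a public IP at launch time. This is only a minimal sketch; the region, AMI, subnet, security group and key pair values are placeholders you would replace with your own:

    require 'aws-sdk-ec2' # gem install aws-sdk-ec2

    ec2 = Aws::EC2::Client.new(region: 'us-east-1')

    # Launch one instance into a VPC subnet and request a public IP for it.
    # All IDs below are placeholders -- substitute your own values.
    resp = ec2.run_instances(
      image_id:      'ami-12345678',
      instance_type: 't2.micro',
      key_name:      'my-keypair',
      min_count:     1,
      max_count:     1,
      network_interfaces: [{
        device_index: 0,
        subnet_id:    'subnet-12345678',
        groups:       ['sg-12345678'],
        associate_public_ip_address: true # gives the instance a public IP
      }]
    )

    puts resp.instances.first.instance_id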
If you want more security, you can always deploy from a machine in your VPC that you can reach over SSH from the outside (a bastion host). You can SSH to that machine and then run cap deploy from there.
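If you go the bastion route, Capistrano can also tunnel its SSH connections through that machine instead of you logging in and deploying from it. A rough sketch for Capistrano 3; the bastion hostname, user and private IPs are placeholders:

    # config/deploy/production.rb
    require 'net/ssh/proxy/command'

    # Private IPs of the app servers inside the VPC (placeholders).
    server '10.0.1.10', user: 'deploy', roles: %w[app web]
    server '10.0.1.11', user: 'deploy', roles: %w[app web]

    # Route every SSH connection through the bastion host that has a public IP.
    set :ssh_options, {
      proxy: Net::SSH::Proxy::Command.new('ssh deploy@bastion.example.com -W %h:%p'),
      forward_agent: true
    }

With that in place you can keep running cap production deploy from your own machine.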

Related

Terraform setup tips: TLS communication across VPCs

I'm working for a client that has a simple enough problem:
They have EC2s in two different Regions/VPCs that are hosting microservices. Up to this point each EC2 only needed to communicate with EC2 instances in the same subnet, but now we need to provision our infrastructure so that specific EC2s in VPC A's public subnet can call specific EC2s in VPC B's public subnet (and vice versa). Communication would consist of calls to RESTful APIs over HTTPS/TLS.
This is nothing revolutionary but IT moves slowly and I want to create a Terraform proof of concept that:
Creates two VPCs
Creates a public subnet in each
Creates an EC2 in each
Installs httpd on each EC2 along with a cert to use SSL/TLS
Creates the proper security groups so that only IPs associated with the specific instance can call the relevant service
There is no containerization at this client, just individual EC2s for each app with 1 or 2 backups to distribute the load. I'm working with Terraform so I can submit different ideas to them for consideration, such as using VPC peering, Elastic IPs, NAT gateways, etc.
I can see how to use Terraform to make these infrastructure changes, but I'm not sure how to create EC2s that install a server that can use a temp cert to demonstrate HTTPS traffic. I see a tool called Packer, but was also thinking I should just create a custom AMI that does this.
What would the best solution be? This doesn't have to be production-ready so I'm favoring creating a fast stable proof-of-concept.
I would use the EC2 user_data option in Terraform to install httpd and create your SSL cert. Packer is great if you want to create AMIs to spin up, but since this is a POC and you are not doing any complex configuration that would take long to perform, I would just use user_data.

Is it possible to connect to a database hosted on a local machine through AWS Lambda

I launched an RDS instance, S3, and EC2 in AWS and it is triggered properly using Lambda. Now I wish to move the RDS and EC2 from AWS to my local machine. My Lambda is triggered from S3.
How do I connect to the local database through Lambda in AWS?
It appears that your requirement is:
You wish to run an AWS Lambda function
Within the function, you wish to connect to a database running on your own computer (outside of AWS)
Firstly, I would not recommend this strategy. To maintain good performance, you should always have an application as close as possible to the database. This means on the same network, in the same location and not going across remote network connections or the Internet.
However, if you wish to do this, here are some things you would need to do (a connection sketch in Ruby follows the list):
Your database will need to be accessible on the Internet, so that you can connect to it remotely. To test this, try accessing it from an Amazon EC2 instance.
The AWS Lambda function should either be configured without VPC connectivity (which means that it is connected to the Internet) or, if you have configured it for VPC connectivity, it needs to be in a private subnet with a NAT Gateway enabling Internet access.
(Optional) For added security, you could lock down your database to only accept connections from a known IP address. To achieve this, you would need to use the VPC + NAT Gateway configuration so that all traffic comes from the Elastic IP address assigned to the NAT Gateway.
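To make the connection side concrete, here is a rough sketch of a handler using the Lambda Ruby runtime and the mysql2 gem, assuming a MySQL database exposed to the Internet; every connection detail is a placeholder supplied through environment variables:

    # lambda_function.rb -- package the mysql2 gem with the deployment zip
    require 'mysql2'

    def lambda_handler(event:, context:)
      # The database must be reachable from the Internet (or from the NAT
      # Gateway's Elastic IP if the function runs inside a VPC).
      client = Mysql2::Client.new(
        host:     ENV['DB_HOST'],            # e.g. your office's public IP or DDNS name
        port:     ENV.fetch('DB_PORT', 3306).to_i,
        username: ENV['DB_USER'],
        password: ENV['DB_PASSWORD'],
        database: ENV['DB_NAME'],
        connect_timeout: 5
      )

      rows = client.query('SELECT 1 AS ok').to_a
      { statusCode: 200, body: rows.to_s }
    ensure
      client&.close
    end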
I agree with John Rotenstein that connecting your local machine to a Lambda running on AWS is probably a bad idea.
If your intention is to develop or test locally, I recommend the Serverless Framework and the serverless-offline plugin. It will allow you to simulate Lambda locally, and you can pass database config values through as environment variables.
See: Running AWS Lambda and API Gateway locally: serverless-offline

Client communication with Amazon EC2 instance

Can an Amazon EC2 instance process requests from and return results to an external client, which may be a browser or a non-browser application? (I know that the EC2 instance will require an IP address and must be able to create a socket and bind to a port in order to do this.)
I'm considering an Amazon EC2 instance because the server application is not written in PHP, Ruby or any other language that conventional web hosting services support by default.
Sure it will. Just set up the security group the right way to allow your clients to connect.
Take a look at this guide: Amazon Elastic Compute Cloud - Security Groups
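If you would rather script it than click through the console, opening a port with the AWS SDK for Ruby looks roughly like this; the security group ID is a placeholder, and you should narrow the CIDR range to your clients where possible:

    require 'aws-sdk-ec2' # gem install aws-sdk-ec2

    ec2 = Aws::EC2::Client.new(region: 'us-east-1')

    # Allow inbound TCP traffic on the port your server listens on.
    # 'sg-12345678' is a placeholder for your instance's security group.
    ec2.authorize_security_group_ingress(
      group_id: 'sg-12345678',
      ip_permissions: [{
        ip_protocol: 'tcp',
        from_port:   443,
        to_port:     443,
        ip_ranges:   [{ cidr_ip: '0.0.0.0/0' }]
      }]
    )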
Also keep in mind: it's not possible to change the security groups assigned to an EC2 instance after you create it. This feature is available for VPC instances only. See http://aws.amazon.com/vpc/faqs/#S2 for more information.

How do I connect up my Amazon EC2 instances without manually modifying config files?

I have a three-tier Windows-based web application bundled into 3 AMIs on Amazon EC2 that I use for load testing.
An ASP.NET web application on IIS
A .NET application server
SQL Server
After I launch them, the config files of each tier need to be modified to update the IP addresses.
At the moment I am doing this manually: I connect to the webserver instance via remote desktop and modify the config file to point to the new IP of the application server instance. Then I do the same with the application server to change the IP in the connection string.
This must be a common requirement and I must be missing something obvious. There must be a better way!
I could use Elastic IP addresses, but these machines are only provisioned for a couple of hours at a time, and I would be charged for the addresses when they were NOT in use (which would be most of the time).
Is there some way of persistently naming the machines? Can I somehow get all the machines on the same network and use machine names instead of IP addresses?
I could write some nifty PowerShell script that would perform the modifications remotely. Is there an example somewhere?
I could use a dynamic IP address service. I'm not sure if this would have any negative effect on performance or availability... Are there any downsides to this approach?
I could install some sort of self-configuring service on each machine (which connects to S3? SNS? SimpleDB?) to publish/retrieve the addresses of the other machines and update the config files automatically. Is there an example somewhere?
What is best practice?
You could use Amazon Virtual Private Cloud (Amazon VPC). You get a private subnet where you can assign an IP address to an instance, but it may require launching the instance from the command line to assign the IP. VPC is charged the same way as EC2.
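You do not necessarily have to drop to the command line: pinning the private IP at launch can also be scripted. Here is a rough sketch with the AWS SDK for Ruby (equivalent parameters exist in the other SDKs); the AMI, subnet and IP values are placeholders:

    require 'aws-sdk-ec2' # gem install aws-sdk-ec2

    ec2 = Aws::EC2::Client.new(region: 'us-east-1')

    # Launch each tier with a fixed, known private IP inside the VPC subnet,
    # so the config files can point at stable addresses. IDs are placeholders.
    ec2.run_instances(
      image_id:      'ami-12345678',   # e.g. the SQL Server AMI
      instance_type: 'm1.large',
      min_count:     1,
      max_count:     1,
      subnet_id:     'subnet-12345678',
      private_ip_address: '10.0.0.20'  # always the same for this tier
    )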

Amazon VPC testing

I sell a product that runs on Amazon EC2. A company now wants to purchase and install it within their perimeter... This also implies the use of a VPN connection to the EC2 datacenter.
I want to test my product using Amazon VPC before handing over the code. Must I change my code to make it work across the VPC? If I run on Windows, what is the quickest and easiest desktop VPN client available that will allow me to connect across the VPN to the Amazon datacenter?
Make sure you set up NAT servers and set your routes in the AWS console. Your client can have some security infrastructure for extending their data center to the cloud, such as firewall rules at the VPC level. Disable firewall rules on the server you deploy to, since your VPC already takes care of this. As root, execute the following command: service iptables stop (you probably already know this, I am guessing).
Is it important for your app to run across VPCs? Depending on how large the company you are selling to is, their security team may give them the runaround about allowing VPC-to-VPC communication, so consider whether your software really needs to span VPCs.
