AWS EC2 - Web App on Multiple EC2 Instances with a Load Balancer

Currently I have set up my web application on two EC2 instances. Both instances have the same web module and the same SSL certificate.
I have also set up one load balancer in front of both instances for high availability.
But I am wondering about the domain name part. The two instances have different IPs, and right now I have pointed our domain provider at only one instance's IP.
So do I need to provide both instance IPs to my domain provider? Sorry, I'm a newbie on the domain part :(

wasabiz, since you are a newbie, I would suggest going down the AWS Elastic Beanstalk path.
To answer your question:
You can use Route 53 to create/import your domain name. The domain can point to the DNS name of the load balancer, and from there the LB will route traffic to your EC2 instances. You should also introduce an Auto Scaling layer and move the EC2 instances inside it, so that the auto-scaling requirements can be fulfilled. All of these options are configurable in Elastic Beanstalk.
You also have the option to generate TLS certificates in AWS, which are free to use within AWS infrastructure. Otherwise, if you already have a certificate, you can import it into AWS through AWS Certificate Manager and use it wherever needed, e.g. on the load balancer.
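For reference, a minimal Terraform sketch of the DNS piece (the zone and load balancer names here are hypothetical): an alias record that points the domain at the load balancer's DNS name, so the instances' individual IPs never appear at the domain provider at all.

```hcl
# Hypothetical names; both EC2 instances are registered behind aws_lb.web.
resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www.example.com"
  type    = "A"

  alias {
    name                   = aws_lb.web.dns_name
    zone_id                = aws_lb.web.zone_id
    evaluate_target_health = true
  }
}
```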

Related

SSL in EC2 without ELB

I am running a Spring Boot application in EC2. I want the API to be served over HTTPS instead of HTTP.
This is what I did:
Bought a domain at GoDaddy and configured it in Route 53.
Created a cert in AWS Certificate Manager.
Created a Load Balancer and added the cert.
In Route 53, directed my traffic to the ELB.
The above is working fine now. I have only one instance, and the only use of the ELB is SSL termination. But I want to get rid of the ELB, as it is costing me too much.
Is there any other way to serve the API over HTTPS for a Spring Boot application running on EC2, without an ELB?
An ELB can be quite expensive, and above all wasted if you have only one instance.
Try putting CloudFront in... well... front of your instance. You get the benefit of managing AWS certificates the same way you do with the LB, and you can also take advantage of caching and edge locations.
You can also point Route 53 at CloudFront: just add a CNAME to your hosted zone that references the CloudFront DNS name.
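A minimal Terraform sketch of that setup (domain, cert, and instance names are all hypothetical; it assumes the ACM cert lives in us-east-1, which CloudFront requires): a distribution with the EC2 instance as a custom origin, plus the CNAME record.

```hcl
resource "aws_cloudfront_distribution" "api" {
  enabled = true
  aliases = ["api.example.com"]

  origin {
    # Note: public_dns changes if the instance is stopped and started;
    # an Elastic IP with its stable public DNS name avoids that.
    domain_name = aws_instance.app.public_dns
    origin_id   = "ec2-origin"

    custom_origin_config {
      http_port              = 8080         # Spring Boot's default port
      https_port             = 443
      origin_protocol_policy = "http-only"  # TLS terminates at CloudFront
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    target_origin_id       = "ec2-origin"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = true
      cookies { forward = "all" }  # effectively disables caching for the API
    }
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.api.arn  # must be in us-east-1
    ssl_support_method  = "sni-only"
  }
}

resource "aws_route53_record" "api" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "api.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [aws_cloudfront_distribution.api.domain_name]
}
```

One caveat: with origin_protocol_policy = "http-only", traffic between CloudFront and the instance is plain HTTP, so this secures the client-to-edge leg only, the same trade-off as terminating SSL at the ELB.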

Terraform setup tips: TLS communication across VPCs

I'm working for a client that has a simple enough problem:
They have EC2 instances hosting microservices in two different Regions/VPCs. Up to this point every instance only needed to communicate with instances in the same subnet, but now we need to provision the infrastructure so that specific instances in VPC A's public subnet can call specific instances in VPC B's public subnet (and vice versa). Communication would be RESTful API calls over HTTPS (TLS 1.2).
This is nothing revolutionary, but IT moves slowly, and I want to create a Terraform proof of concept that:
Creates two VPCs
Creates a public subnet in each
Creates an EC2 in each
Installs httpd on each EC2 instance, along with a cert so it can serve SSL/TLS
Creates the proper security groups so that only IPs associated with the specific instance can call the relevant service
There is no containerization at this client, just individual EC2 instances for each app with 1 or 2 backups to distribute the load. I'm working with Terraform so I can submit different ideas to them for consideration, such as using VPC peering, Elastic IPs, NAT gateways, etc.
I can see how to use Terraform to make these infrastructural changes, but I'm not sure how to create EC2 instances that install a web server with a temporary cert to demonstrate HTTPS traffic. I have seen a tool called Packer, but I was also thinking I could just create a custom AMI that does this.
What would the best solution be? This doesn't have to be production-ready, so I'm favoring a fast, stable proof of concept.
I would use the EC2 user_data option in Terraform to install httpd and create your SSL cert. Packer is great if you want to bake AMIs to spin up, but since this is a PoC and you are not doing any complex configuration that would take long to run, I would just use user_data.
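A minimal sketch of the user_data approach for one side (all names are hypothetical, and it assumes an Amazon Linux AMI with yum; mirror it for the other VPC): install httpd and mod_ssl, generate a throwaway self-signed cert, and restrict ingress to the peer instance's IP.

```hcl
resource "aws_security_group" "svc_a" {
  vpc_id = aws_vpc.a.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # Only the peer service in VPC B (defined in that VPC's config) may call in.
    cidr_blocks = ["${aws_instance.svc_b.public_ip}/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "svc_a" {
  ami                         = data.aws_ami.amazon_linux.id
  instance_type               = "t3.micro"
  subnet_id                   = aws_subnet.a_public.id
  vpc_security_group_ids      = [aws_security_group.svc_a.id]
  associate_public_ip_address = true

  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd mod_ssl openssl
    # Throwaway self-signed cert; enough to demonstrate HTTPS in a PoC.
    openssl req -x509 -nodes -newkey rsa:2048 -days 30 \
      -keyout /etc/pki/tls/private/poc.key \
      -out /etc/pki/tls/certs/poc.crt \
      -subj "/CN=poc.internal"
    # Point mod_ssl's default vhost at the generated cert.
    sed -i 's|localhost.crt|poc.crt|; s|localhost.key|poc.key|' /etc/httpd/conf.d/ssl.conf
    systemctl enable --now httpd
  EOF
}
```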

EC2 CLI API not usable within VPC?

I have some instances inside an EC2 VPC (using only IP addresses from RFC 1918) that need to use some EC2 services via the CLI (ec2-describe-instances, ec2-run-instances, etc.).
I can't get it to work: my understanding is that the service endpoint of the CLI is located somewhere in the AWS cloud, and my requests originating from an RFC 1918 address are not routable between the EC2 endpoint and my instance.
Is that correct?
Is my only solution to install a NAT instance within my VPC (which I would like to avoid)? Or is there a way to remap this EC2 service endpoint to an RFC 1918 address within my VPC?
Any help welcome!
Thanks in advance
didier
You can give the instance an Elastic IP address and get outbound access to other public IPs, like the EC2 API endpoint. Make sure your security group doesn't allow any inbound traffic from the Internet.
Alternatively, if you don't want to use an EIP, you can launch the instance in the VPC with a public IP address; more here: http://aws.typepad.com/aws/2013/08/additional-ip-address-flexibility-in-the-virtual-private-cloud.html
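In Terraform terms, the EIP route looks roughly like this (names are hypothetical); note the security group has no ingress blocks at all, so nothing on the Internet can initiate a connection to the instance:

```hcl
resource "aws_security_group" "egress_only" {
  vpc_id = aws_vpc.main.id

  # No ingress blocks: all inbound traffic from the Internet is denied.
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # outbound HTTPS to the EC2 API endpoint
  }
}

resource "aws_eip" "api_client" {
  instance = aws_instance.worker.id
  domain   = "vpc"
}
```

For the second option, setting associate_public_ip_address = true on the instance gives the same outbound reachability without consuming an EIP.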

Client communication with Amazon EC2 instance

Can an Amazon EC2 instance process requests from, and return results to, an external client, which may be a browser or a non-browser application? (I know that the EC2 instance will require an IP address and must be able to create a socket and bind to a port in order to do this.)
I'm considering an Amazon EC2 instance because the server application is not written in PHP, Ruby or any other language that conventional web hosting services support by default.
Sure it will. Just set up the security group the right way to allow your clients to connect.
Take a look at this guide: Amazon Elastic Compute Cloud - Security Groups
Also keep in mind: for EC2-Classic it's not possible to change an instance's security groups after you have created the instance; that feature is available for VPC instances only. See http://aws.amazon.com/vpc/faqs/#S2 for more information.
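As a sketch, a security group that lets any external client reach the server application (the port is hypothetical; use whatever your application binds to):

```hcl
resource "aws_security_group" "app" {
  # Allow any client, browser or not, to reach the server's listening port.
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```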

Amazon EC2 autoscaling instances with elastic IPs

Is there any way to make new instances added to an Auto Scaling group associate themselves with an Elastic IP? I have a use case where the instances in my Auto Scaling group need to be whitelisted on remote servers, so they need predictable IPs.
I realize there are ways to do this programmatically using the API, but I'm wondering if there's any other way. It seems like CloudFormation may be able to do this.
You can associate an Elastic IP with ASG instances using manual or scripted API calls just as you would with any other instance -- however, there is no built-in automated way to do this. ASG instances are designed to be ephemeral/disposable, and Elastic IP association goes against this philosophy.
To solve your problem re: whitelisting, you have a few options:
If the system that requires predictable source IPs is on EC2 and under your control, you can disable IP restrictions and use EC2 security groups to secure traffic instead
If the system is not under your control, you can set up a proxy server with an Elastic IP and have your ASG instances use the proxy for outbound traffic
You can use http://aws.amazon.com/vpc/ to gain complete control over instance addressing, including network egress IPs -- though this can be time-consuming
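For the scripted-API-calls route mentioned above, one common pattern is a launch-template user_data script that claims a pre-allocated Elastic IP at boot. A sketch (the allocation ID, region, AMI, and IAM names are hypothetical; it assumes the instance profile is allowed to call ec2:AssociateAddress and that IMDSv1 metadata is reachable):

```hcl
resource "aws_launch_template" "whitelisted" {
  image_id      = data.aws_ami.app.id
  instance_type = "t3.micro"

  iam_instance_profile {
    name = aws_iam_instance_profile.eip_associate.name  # needs ec2:AssociateAddress
  }

  user_data = base64encode(<<-EOF
    #!/bin/bash
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    # eipalloc-... is a pre-allocated Elastic IP from the whitelisted pool.
    aws ec2 associate-address \
      --region us-east-1 \
      --instance-id "$INSTANCE_ID" \
      --allocation-id eipalloc-0123456789abcdef0
  EOF
  )
}
```

With more than one instance in the group, the script would first call aws ec2 describe-addresses and pick an allocation that has no AssociationId yet.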
There are 3 approaches I could find to doing this. CloudFormation will just automate it, but you need to understand what's going on first.
1. As @gabrtv mentioned, use VPC. This lends itself to two options:
1.1 Within a VPC, use a NAT gateway to route all traffic in and out. The gateway gets an Elastic IP for Internet traffic, so you whitelist the NAT gateway's address on your server side. Look for "NAT gateway" in the AWS documentation.
1.2 Create a Virtual Private Gateway/VPN connection to your backend servers in your datacenter and route traffic through that:
1.2.a Create your instances within a DEDICATED private subnet.
1.2.b Whitelist the entire subnet on your side; any request from that subnet will be allowed in.
1.2.c Make sure the routes in the subnet are correct.
(I'm skipping 2 on purpose, since that is 1.2.)
3. The LAZY way: utilize AWS OpsWorks to do two things:
1st: Allocate a RESOURCE pool of Elastic IPs.
2nd: Start LOAD instances on demand and AUTO-assign each one an Elastic IP from the pool.
For the second part you will need your 24/7 instances to be the minimum and the load instances to be the MAX. AWS OpsWorks now allows CloudWatch alarms to trigger instance startup, so it is very similar to an ASG.
The only disadvantages of OpsWorks are that instances are stopped rather than terminated when the load goes down, and that you must "create" instances beforehand. You also depend on Chef Solo to initialize your instances, but it is the only way I could find to auto-assign EIPs to newly created instances.
Cheers!
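Of these, option 1.1 is the simplest to express in Terraform. A sketch with hypothetical names: one Elastic IP on a NAT gateway, and a private route table that sends all egress through it, so the remote side only ever whitelists the NAT gateway's address.

```hcl
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "egress" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id  # the NAT gateway lives in a public subnet
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.egress.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id  # the ASG instances launch here
  route_table_id = aws_route_table.private.id
}
```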
