AWS EC2 instance in public subnet cannot talk to outside world

I have a fairly simple architecture with only two subnets: public and private. In the public subnet, which has an Internet Gateway configured, I have two EC2 instances:
A Linux EC2 instance (where I run a REST API)
An OpenVPN Access Server
https://i.stack.imgur.com/2MHco.png
The problem is that, from the Linux EC2 instance, I cannot:
ping an outside host, for example cnn.com
log in to ECR with aws ecr / docker login (to pull Docker images)
Python scripts sitting on the Linux EC2 instance also need to call REST APIs in the outside world.
Through trial and error, I found that if I add an inbound rule allowing all traffic from 0.0.0.0/0, I can ping and run those aws/docker commands. That approach is of course a security hole and less than ideal. Any suggestions?
Thanks in advance.
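For what it's worth: security groups are stateful, so no inbound rule should be needed for traffic the instance itself initiates. If the rule that fixed things was actually added to the subnet's network ACL, the explanation is that NACLs are stateless and need an explicit rule for return traffic. A minimal sketch (the ACL ID is a hypothetical placeholder) that allows return traffic on ephemeral ports instead of opening everything:

# Allow inbound return traffic on ephemeral ports only, rather than all traffic
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 110 --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 --rule-action allow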

Related

Setup VPN to connect VPC to home network?

I'm not clear if this is possible, but here is what I'd like to do:
Goal:
Set up a VPN between my home network and my AWS VPC. A use case I'd like to have working:
Have a Lambda function write to a database, e.g. Postgres running on my home network behind my router. Think of some machine with a 192.168.x.x address on my home network running Postgres.
I have read the documentation and I wanted to confirm what it would require to make this happen. Assume I have a VPC with a Lambda deployed to it.
Create a Virtual Private Gateway for the VPC
Create a Customer Gateway for my home network.
Configure the Customer Gateway machine in my home network (e.g. a Raspberry Pi) after downloading the VPN connection file from AWS.
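In AWS CLI terms, I believe those steps (plus creating the Site-to-Site VPN connection itself, which is where the downloadable configuration file comes from) correspond to something like the following; every ID and my router's public IP below are placeholders:

# Virtual private gateway, attached to the VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
# Customer gateway pointing at the home router's public IP
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.12 --bgp-asn 65000
# The VPN connection ties the two together; its configuration file is what
# gets loaded onto the device at home
aws ec2 create-vpn-connection --type ipsec.1 \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --options StaticRoutesOnly=true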
I'm looking at this article for reference:
setup raspberry PI3 as AWS VPN Customer Gateway
Is this all that I would need to do? Do I need to use some 3rd party software in addition to this? Or is this not even possible?
Thanks
You can set up an OpenVPN server on an EC2 instance and change the security groups on your VPC resources to only allow access from your VPC CIDR block.
AWS provides an AMI for the OpenVPN Access Server: https://aws.amazon.com/marketplace/pp/B00MI40CAE/ref=mkt_wir_openvpn_byol
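For example, a sketch of that security group change (the group ID and CIDR are placeholders, and port 5432 assumes the Postgres use case above):

# Allow Postgres only from addresses inside the VPC CIDR block
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 \
  --cidr 10.0.0.0/16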

Terraform setup tips: TLS communication across VPCs

I'm working for a client that has a simple enough problem:
They have EC2s in two different Regions/VPCs that are hosting microservices. Up to this point all EC2s only needed to communicate with instances in the same subnet, but now we need to provision our infrastructure so that specific EC2s in VPC A's public subnet can call specific EC2s in VPC B's public subnet (and vice versa). Communication would be RESTful API calls over HTTPS/TLS 1.2.
This is nothing revolutionary but IT moves slowly and I want to create a Terraform proof of concept that:
Creates two VPCs
Creates a public subnet in each
Creates an EC2 in each
Installs httpd on each EC2 along with a cert to enable SSL/TLS
Creates the proper security groups so that only IPs associated with the specific instance can call the relevant service
There is no containerization at this client, just individual EC2s for each app with 1 or 2 backups to distribute the load. I'm working with Terraform so I can submit different ideas to them for consideration, such as using VPC peering, Elastic IPs, NAT gateways, etc.
I can see how to use Terraform to make these infrastructure changes, but I'm not sure how to create EC2s that install a server that can use a temp cert to demonstrate HTTPS traffic. I've seen a tool called Packer, but was also thinking I should just create a custom AMI that does this.
What would the best solution be? This doesn't have to be production-ready so I'm favoring creating a fast stable proof-of-concept.
I would use the EC2 user_data option in Terraform to install httpd and create your SSL cert. Packer is great if you want to create AMIs to spin up, but since this is a POC and you are not doing any complex configuration that would take long to perform, I would just use user_data.
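For example, a minimal user_data sketch assuming an Amazon Linux 2 AMI (package names and config paths differ on other distros) that installs httpd plus a throwaway self-signed cert:

#!/bin/bash
# Install Apache, mod_ssl and OpenSSL, then generate a 30-day self-signed cert
yum install -y httpd mod_ssl openssl
openssl req -x509 -nodes -newkey rsa:2048 -days 30 \
  -subj "/CN=$(hostname -f)" \
  -keyout /etc/pki/tls/private/selfsigned.key \
  -out /etc/pki/tls/certs/selfsigned.crt
# Point mod_ssl's default config at the throwaway cert and start Apache
sed -i 's|^SSLCertificateFile .*|SSLCertificateFile /etc/pki/tls/certs/selfsigned.crt|' /etc/httpd/conf.d/ssl.conf
sed -i 's|^SSLCertificateKeyFile .*|SSLCertificateKeyFile /etc/pki/tls/private/selfsigned.key|' /etc/httpd/conf.d/ssl.conf
systemctl enable --now httpd

In Terraform you would pass this script via the user_data argument on the aws_instance resource, e.g. user_data = file("bootstrap.sh").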

Forward Traffic from Windows EC2 Instance to ElasticSearch VPC Endpoint

I have a Windows EC2 instance I use for my public-facing C# API. The VPC (and related Internet Gateway, subnets, etc.) are all default.
I've now set up an AWS Elasticsearch Service domain using their more secure VPC endpoint option (instead of public-facing), and I've associated it with the same subnet and VPC as my Windows EC2 instance above.
I'd like to get them to talk to each other.
Reading from https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html
It seems what you'd do is SSH tunnel / port forward traffic from localhost:9200 on the EC2 instance to the actual Elasticsearch service (via that VPC endpoint).
It seems this command is where the magic happens:
ssh -i ~/.ssh/your-key.pem ec2-user@your-ec2-instance-public-ip -N -L 9200:vpc-your-amazon-es-domain.region.es.amazonaws.com:443
but that is for a Linux EC2 instance.
If I am Remote Desktopped into my Windows EC2 instance (the API), how can I make it so that when I go to http://localhost:9200 in a browser, the traffic is sent to my VPC endpoint:
vpc-your-amazon-es-domain.region.es.amazonaws.com:443
Thanks!
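(For reference, the literal Windows analogue of that ssh tunnel would be netsh's port proxy, run from an elevated prompt on the instance; the endpoint below is the placeholder hostname from the question. Since it forwards raw TCP to a TLS port, you would browse to https://localhost:9200 and accept the certificate-name warning.)

netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=9200 connectaddress=vpc-your-amazon-es-domain.region.es.amazonaws.com connectport=443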
Alright, so I'll answer my two questions:
First, it's actually quite easy, just RDP to your box and access the instance directly via the VPC endpoint. You don't need to do anything wacky like port forwarding using the netsh command or anything like that. Simply make sure the server (in my case my API) is on the same VPC and you're fine. I just had an error in my connection string that's why it didn't connect. To confirm, I RDP'D in and was able to hit the endpoint directly in a browser on port 80. While it's true the actual Elasticsearch runs on port 9200, you don't need to forward to localhost:9200 --> vpc:9200.
Now, regarding the second question about hitting it locally: the problem is that the service lacks a public IP address, so you can't reach it from your own machine. You can either go through a somewhat complicated setup on AWS, or, easier, just run Elasticsearch locally for now until you are ready to use the VPC one (your code will then just run). Another option is to use security groups to make a publicly accessible cluster for now, and then, once your code and search service/layer are done, start anew with a VPC/secure Elasticsearch service, and that should be it.
Another thing many people mention is that it is cheaper, and you have more control, if you set up your own Elasticsearch on your local machine and then one on EC2 (this is just from reading blogs and seeing how much frustration people had with the managed service).

Deploy application on AWS VPC

I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions mentioned in the SQS messages, and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic?
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano, but in a VPC there is the concept of a private IP address, and you cannot access a machine inside a private subnet from outside. So my question is:
How should I deploy code on the EC2 instances or rather how should I connect to them?
Thank You.
This question is pretty broad, but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic
It's beneficial if you care about the security of your data in transit and at rest. In a VPC, none of your traffic is exposed to the outside, and you can choose which components you want to expose in case you want to receive traffic/data from the outside, e.g. your ELB or ELBs.
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano, but in a VPC there is the concept of a private IP address, and you cannot access a machine inside a private subnet from outside. So my question is: how should I deploy code on the EC2 instances, or rather, how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to. You can use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
If you want more security, you can always deploy from a machine inside your VPC that is reachable over SSH from the outside (a bastion host). You can SSH to that machine and then run cap deploy from there.
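For example (the addresses are hypothetical), with OpenSSH 7.3+ you can hop through the public machine in one step, and putting the same route in ~/.ssh/config lets cap deploy use it transparently:

# One-off: reach a private instance through the public bastion
ssh -J ec2-user@203.0.113.10 ec2-user@10.0.1.25

# Or persist it in ~/.ssh/config:
Host 10.0.1.*
    ProxyJump ec2-user@203.0.113.10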

EC2 CLI API not usable within VPC?

I have some instances within an EC2 VPC (using only IP addresses from RFC 1918) that need to use some EC2 services via the CLI interface (ec2-describe-instances, ec2-run-instances, etc.).
I can't get it to work. My understanding is that the service endpoint of the CLI interface is located somewhere in the AWS cloud, and my requests, originating from an RFC 1918 address, are not routable between the EC2 service endpoint and my instance.
Is that correct ?
Is my only solution to install a NAT instance within my VPC (I would like to avoid that)? Or is there a way to remap this EC2 service endpoint within my VPC to an RFC 1918 address?
Any help welcome!
Thanks in advance
didier
You can give the instance an Elastic IP address and get outbound access to other public IPs, like the EC2 API endpoint. Make sure your security group doesn't allow any inbound traffic from the Internet.
Alternatively, if you don't want to use an EIP, you can launch the instance in a VPC with a public IP address. More here: http://aws.typepad.com/aws/2013/08/additional-ip-address-flexibility-in-the-virtual-private-cloud.html
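A sketch of the EIP route, with hypothetical IDs:

# Allocate an EIP for use in a VPC (returns an AllocationId), then attach it
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0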
