How to limit access to Amazon EC2 to IP ranges

I have an Amazon EC2 instance that hosts different services (Cassandra, Elasticsearch, RabbitMQ, MySQL...) used by several developers at different locations. Since these developers have dynamic IP addresses, and this EC2 instance is used only for development, I left inbound access to the required ports open to 0.0.0.0/0. I'm aware that this is absolutely not recommended and that I should limit access, but I don't want to change the rules every day as someone's IP address changes.
However, I just got a report from Amazon that my instance is being used for a DoS attack, so I would like to fix this.
My question is whether it is possible to make a rule that will limit access to several ranges such as:
94.187.128.0 - 94.187.255.255
147.91.0.0 - 147.91.255.255

Definitely yes, because the ranges you listed aren't arbitrary ranges but correspond exactly to CIDR blocks: 94.187.128.0 - 94.187.255.255 is 94.187.128.0/17, and 147.91.0.0 - 147.91.255.255 is 147.91.0.0/16.
A range that cannot be expressed in CIDR notation won't be accepted by a security group rule.
You can use ipcalc or a similar site to make the conversion easier.
If it suits you, you can use a port range like 2000-3000, or, better, use custom ports for the services. Then the range can be as small as, say, 2000-2001, and by using port ranges you can fit one user into one rule.
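For illustration, assuming the AWS CLI is configured, `sg-0123456789abcdef0` stands in for your security group ID, and the port numbers are placeholders, the rules might be created like this:

```bash
# Allow one service port (9042 is just an example) from the first range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 9042 \
    --cidr 94.187.128.0/17

# Same rule for the second range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 9042 \
    --cidr 147.91.0.0/16

# A contiguous port range also works, e.g. custom ports 2000-2001
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2000-2001 \
    --cidr 94.187.128.0/17
```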
An alternative, more secure but more involved approach: host a web page the user connects to with a proper security key. If the key is recognized, a script on the server adds a rule to a security group using the client's IP. Another script, run from cron, deletes the rules older than X hours. To dig deeper, you may want to look here: on the Apache side check two-way SSL authentication, on the AWS side check the API and Command Overview.
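A minimal sketch of the AWS side of that idea, assuming a dedicated security group is used only for these temporary rules (the group ID and port range are placeholders, and instead of tracking each rule's age the cron job simply wipes the whole group):

```bash
#!/bin/bash
# add-client.sh -- called by the web page's handler after the key is verified;
# $1 is the authenticated client's public IP. Group ID and ports are placeholders.
GROUP_ID="sg-0123456789abcdef0"

aws ec2 authorize-security-group-ingress \
    --group-id "$GROUP_ID" \
    --protocol tcp --port 2000-2001 \
    --cidr "$1/32"
```

And the cleanup job run from cron:

```bash
#!/bin/bash
# cleanup.sh -- run from cron every X hours; revokes all temporary rules at once
# rather than tracking each rule's age (a deliberate simplification).
GROUP_ID="sg-0123456789abcdef0"

PERMS=$(aws ec2 describe-security-groups --group-ids "$GROUP_ID" \
        --query 'SecurityGroups[0].IpPermissions' --output json)
if [ "$PERMS" != "[]" ]; then
    aws ec2 revoke-security-group-ingress --group-id "$GROUP_ID" --ip-permissions "$PERMS"
fi
```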

Related

Is it possible to block countries IP using the security group on an EC2 instance?

Is it possible to block an entire country from accessing my website with a security group rule on an Amazon EC2 instance, instead of using iptables or something else?
As the others commented, it is hard to block traffic from particular countries if someone is smart enough to use a proxy.
But you can use some simple ways to filter most traffic from a range of IPs (not all customers know to use a proxy).
One is to set up a Network ACL in AWS. Please go through the AWS document Network ACLs as a start.
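As a rough illustration (the ACL ID, rule number and CIDR block are placeholders), a deny entry might be added with the AWS CLI like this; you would need one entry per address block you want to reject:

```bash
# Deny all traffic from one address block on the subnet's network ACL
# (ACL ID, rule number and CIDR are placeholders; -1 means "all protocols")
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 100 \
    --protocol=-1 \
    --rule-action deny \
    --cidr-block 203.0.113.0/24
```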
Another way, if you can manage Route 53 for your website, is to enable a geolocation routing policy and route the traffic from some countries to a fake website. You can go through the document Choosing a Routing Policy.

Automatically assign an Elastic IP from a pool of IPs to an auto scaling instance

I am trying my hand at auto scaling, and all is well except that I need all of my instances to be assigned an Elastic IP (this is for my payment gateway, which needs to know all the IPs that we are using).
I'm happy to add, say, 8 Elastic IPs to my account, but what I need is a facility to automatically assign one of these to an instance as it boots up and then release it as it shuts down.
I guess I need a startup script but this is beyond my knowledge of AWS (so far I do everything through the web console).
Any samples/help appreciated!
If your gateway is deployed in the same Amazon account as your servers, you might want to look at a VPC solution where you can control the instances' private IPs using masks.
If that is not an option, you will need to write a script, which you should add to the Launch Configuration's User Data.
In this script you can use the AWS CLI to find which Elastic IP addresses are available using describe-addresses, and associate one of them with the newly created instance using associate-address.
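A rough sketch of such a script, assuming the AWS CLI is installed, the instance role is allowed to call these EC2 APIs, and the Elastic IPs are VPC-style allocations (the region is a placeholder):

```bash
#!/bin/bash
# Boot-time script placed in the Launch Configuration's user data.
REGION="us-east-1"   # placeholder

# Who am I? (instance metadata service)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Pick the first Elastic IP in the account that is not associated with anything
ALLOC_ID=$(aws ec2 describe-addresses --region "$REGION" \
    --query 'Addresses[?AssociationId==`null`].AllocationId | [0]' \
    --output text)

# Attach it to this instance. Freeing it up again could be done from a shutdown
# script calling disassociate-address, or it happens automatically on termination.
if [ "$ALLOC_ID" != "None" ]; then
    aws ec2 associate-address --region "$REGION" \
        --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID"
fi
```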

How to refer to other ec2 instances? Is Elastic IP the only feasible solution?

Initially my issue was "How do I RDP into an EC2 instance without having to first find its ip address". To solve that I wrote a script that executes periodically on each instance. The script reads a particular tag value and updates the corresponding entry in Route53 with the public dns name of the instance.
This way I can always rdp into web-01.ec2.mydomain.com and be connected to the right instance.
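Roughly, the script does something like this (the tag key, hosted zone ID and TTL shown here are placeholders rather than the real values, and a default AWS CLI region is assumed to be configured):

```bash
#!/bin/bash
# Periodically upsert a Route 53 CNAME pointing a per-instance name at the
# instance's current public DNS name. Tag key, zone ID and TTL are placeholders.
ZONE_ID="Z0123456789ABCDEFGHIJ"
TAG_KEY="DnsName"

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
PUBLIC_DNS=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)

# Read the desired record name (e.g. web-01.ec2.mydomain.com) from the instance tag
RECORD_NAME=$(aws ec2 describe-tags \
    --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=$TAG_KEY" \
    --query 'Tags[0].Value' --output text)

aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch "{
  \"Changes\": [{
    \"Action\": \"UPSERT\",
    \"ResourceRecordSet\": {
      \"Name\": \"$RECORD_NAME\",
      \"Type\": \"CNAME\",
      \"TTL\": 30,
      \"ResourceRecords\": [{ \"Value\": \"$PUBLIC_DNS\" }]
    }
  }]
}"
```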
As I continued with setting up my instances, I realized to setup mongodb replication, I will need to somehow refer to three separated instances. I cannot use the internal private ip addresses as they keep changing (or are prone to change on instance stop/start & when the dhcp lease expires).
Trying to access web-01.ec2.mydomain.com from within my EC2 instance returns the internal ip address of the instance. Which seems to be standard behaviour. Thus by mentioning the route53 cnames for my three instances, I can ensure that they can always be discovered by each other. I wouldn't be paying any extra data transfer charges, as the cnames will always resolve to internal ip. I would however be paying for all those route53 queries.
I can run my script every 30 seconds or even more often to ensure that the DNS entries are as up to date as possible.
At this point, I realized that what I have in place is very much an Elastic IP alternative. Maybe not completely, but surely for all my use cases. So I am wondering, whether to use Elastic IP or not. There is no charge involved as long as my instances are running. It does seem an easier option.
What do most people do? If someone with experience with this could reply, I would appreciate that.
Secondly, what happens in those few seconds/minutes during which the instance loses its current private IP and gets a new internal IP? I'm assuming all existing connections get dropped. Does that affect the ELB health checks (a ping every 30 seconds)? I'm assuming that if I were using an Elastic IP, the DNS name would resolve to the new IP immediately, as opposed to only after my script executes. Assuming my script runs every 30 seconds, will there be only 30 seconds of downtime, or can there possibly be more? Will an Elastic IP always perform better than my scripted solution?
According to the official AWS documentation, a "private IP address is associated exclusively with the instance for its lifetime and is only returned to Amazon EC2 when the instance is stopped or terminated. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped." Checking every 30 seconds whether something has changed therefore seems inherently wrong. This leaves you with two obvious options:
Update the DNS once at/after boot time
Use an elastic IP and static DNS
Elastic IPs that are in use don't cost you anything, and even parked ones cost only a little. If your instances are mostly up, use an Elastic IP. If they are mostly down, go the boot-time update route. If your instance sits in a VPC, not even the boot-time update is strictly needed (but in a VPC you probably have different needs and a more complex network setup anyway).
Another option that you could consider is to use a software defined datacenter solution such as Amazon VPC or Ravello Systems (disclaimer: our company).
Using such a solution will allow you to create a walled-off private environment in the public cloud. Inside the environment you have full control, including your own private L2 network on which you manage IP addressing and can use e.g. statically allocated IPs. Communication with the outside (e.g. your app servers) happens via the IPs and ports that you configure.

Amazon EC2 - seeing files between instances

I've set up 2 instances of Windows Server 2008 on EC2. I want one to act as the database server and the other as the client. For the client app to work it needs to be able to connect to the server instance with ALL of these things:
IP address of the database instance
access through a given UDP port
server name e.g. \\MyServer
an actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
I'm a complete novice with EC2. Is there any way I can set this up?
Many thanks
At least three of the four are completely possible and I have worked with similar setups. Maybe someone else knows more about the UDP bit.
IP address of the database instance
That is standard on EC2. Every instance has one network interface with both an internal (private) address and a public-facing one; for communication between instances, use the internal address. Traffic between instances over the internal addresses is free within the same availability zone.
Access through a given UDP port
I have never tried UDP communication in EC2, but if it works you should probably keep it within a local network of your own, i.e. a virtual private cloud (VPC).
Server name e.g. \\MyServer
This kind of host name lookup does not need a name server, although you could certainly run one (preferably within a VPC). If you put the server name and its (internal) IP into the client's hosts file (%systemroot%\system32\drivers\etc\hosts), the lookup works without any name server.
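For example, an entry in the client's hosts file might look like this (the private IP is a placeholder for the database instance's actual internal address):

```
# %systemroot%\system32\drivers\etc\hosts on the client instance
# 10.0.0.12 is a placeholder for the database server's private IP
10.0.0.12    MyServer
```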
An actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
Folder sharing should work the same as with any other Windows machine, but even that should probably be kept within a VPC.
Setting up a VPC can feel a little steep to start with, but the documentation is good and the hard bits (such as VPN tunnels) are often not needed. Have a look at the example scenarios and follow the one that best matches your needs.

Should I use Amazon's AWS Virtual Private Cloud (VPC) [closed]

Currently moving to Amazon EC2 from another VPS provider. We have your typical web server / database server needs. Web servers in front of our database servers. Database servers are not directly accessible from the Internet.
I am wondering if there is any reason to put these servers into an AWS Virtual Private Cloud (VPC) instead of just creating the instances and using security groups to firewall them off.
We are not doing anything fancy, just a typical web app.
Any reason to use a VPC or not using a VPC?
Thanks.
NOTE: New accounts in AWS launch with a "default VPC" enabled immediately, which makes "EC2-Classic" unavailable. As such, this question and answer make less sense now than they did in August 2012. I'm leaving the answer as-is because it helps frame the differences between "EC2-Classic" and the VPC product line. Please see Amazon's FAQ for more details.
Yes. If you're security conscious, a heavy CloudFormation user, or want complete control over autoscaling (as opposed to Beanstalk, which abstracts certain facets of it but still gives you complete access to the scaling parameters), use a VPC. This blog post does a great job summarizing both the pros and cons. Some highlights from the blog post (written by kiip.me):
What’s Wrong with EC2?
All nodes are internet addressable. This doesn’t make much sense for nodes which have no reason to exist on the global internet. For example: a database node should not have any public internet hostname/IP.
All nodes are on a shared network, and are addressable to each other. That means an EC2 node launched by a user “Bob” can access any of EC2 nodes launched by a user “Fred.” Note that by default, the security groups disallow this, but its quite easy to undo this protection, especially when using custom security groups.
No public vs private interface. Even if you wanted to disable all traffic on the public hostname, you can’t. At the network interface level each EC2 instance only has one network interface. Public hostnames and Elastic IPs are routed onto the “private” network.
What's Great About the VPC
First and foremost, VPC provides an incredible amount of security compared to EC2. Nodes launched within a VPC aren’t addressable via the global internet, by EC2, or by any other VPC. This doesn’t mean you can forget about security, but it provides a much saner starting point versus EC2. Additionally, it makes firewall rules much easier, since private nodes can simply say “allow any traffic from our private network.” Our time from launching a node to having a fully running web server has gone from 20 minutes down to around 5 minutes, solely due to the time saved in avoiding propagating firewall changes around.
DHCP option sets let you specify the domain name, DNS servers, NTP servers, etc. that new nodes will use when they’re launched within the VPC. This makes implementing custom DNS much easier. In EC2 you have to spin up a new node, modify DNS configuration, then restart networking services in order to gain the same effect. We run our own DNS server at Kiip for internal node resolution, and DHCP option sets make that painless (it just makes much more sense to type east-web-001 into your browser instead of 10.101.84.22).
And finally, VPC simply provides a much more realistic server environment. While VPC is a unique product to AWS and appears to “lock you in” to AWS, the model that VPC takes is more akin to if you decided to start running your own dedicated hardware. Having this knowledge beforehand and building up the real world experience surrounding it will be invaluable in case you need to move to your own hardware.
The post also lists some difficulties with the VPC, all of which more or less relate to routing: getting traffic out of the VPC through an internet gateway or NAT instance, communicating between VPCs, and setting up a VPN to your datacenter. These can be quite frustrating at times, and the learning curve isn't trivial. All the same, the security advantages alone are probably worth the move, and Amazon support (if you're willing to pay for it) is extremely helpful when it comes to VPC configuration.
Currently VPC has some useful advantages over EC2, such as:
multiple NICs per instance
multiple IPs per NIC
'deny' rules (via network ACLs)
DHCP options
predictable internal IP ranges
moving NICs and internal IPs between instances
VPN
Presumably Amazon will upgrade EC2 with some of those features as well, but currently they're VPC-only.
VPCs are useful if your app needs to access servers outside of EC2, e.g. if you have a common service that's hosted in your own physical data center and not accessible via the internet. If you're going to put all of your web and DB servers on EC2, there's no reason to use VPC.
Right now VPC is the only way to have internal load balancers
If you choose RDS to provide your database services, you can configure DB Security Groups to allow database connections from a given EC2 Security Group; then, even if you have dynamic IP addresses in your EC2 cluster, RDS will automatically create the firewall rules to allow connections only from your instances, reducing the benefit of a VPC in this case.
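For example, with the classic DB security groups that RDS offered at the time, such a grant might look like this (the group names and account ID are placeholders):

```bash
# Allow connections to the DB security group from members of an EC2 security group
# (group names and account ID are placeholders)
aws rds authorize-db-security-group-ingress \
    --db-security-group-name my-db-group \
    --ec2-security-group-name web-servers \
    --ec2-security-group-owner-id 123456789012
```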
A VPC, on the other hand, is great when your EC2 instances have to access your local network: you can establish a VPN connection between your VPC and your local network, controlling the IP range, subnets, routes and outgoing firewall rules, which I think is not what you are looking for.
I would also highly recommend trying Elastic Beanstalk, which provides a console that makes it easy to set up your EC2 cluster for PHP, Java and .NET applications, enabling Auto Scaling, Elastic Load Balancing and automatic application versioning, allowing easy rollback from bad deployments.
You have raised a good concern here.
I would like to focus on the viability in terms of cost...
What about the cost factor?
You will be paying for that server by the hour. Even if you pick a $20-$50 a month instance, it is something you will pay for the rest of your server's life. A VPN server is something you can easily set up on old hardware very cheaply, even for free with an open source solution.
Adding a VPN to an existing fleet of AWS servers makes sense; setting up a standalone VPN server on AWS doesn't. I don't think it is the most cost-effective option, but that's just my opinion.
