Amazon EC2 - seeing files between instances

I've set up two instances of Windows Server 2008 on EC2. I want one to act as the database server and the other as the client. For the client app to work, it needs to be able to reach the server instance using ALL of these things:
IP address of the database instance
access through a given UDP port
server name e.g. \\MyServer
an actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
I'm a complete novice with EC2. Is there any way I can set this up?
Many thanks

At least three of the four are completely possible and I have worked with similar setups. Maybe someone else knows more about the UDP bit.
IP address of the database instance
That is standard on EC2. All instances have two network interfaces: one EC2-internal and one to the outside world. For communication between instances, use the internal one; data traffic between instances over the internal interface is free.
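For illustration, a minimal boto3 sketch of looking both addresses up (the instance ID is a placeholder):

    # Look up an instance's internal (private) and external (public) IPs.
    import boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # hypothetical ID
    instance = resp["Reservations"][0]["Instances"][0]
    print(instance["PrivateIpAddress"])                     # use this one between instances
    print(instance.get("PublicIpAddress", "no public IP"))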
Access through a given UDP port
I have never tried UDP communication in EC2, but if it works you should probably keep it within a local network of your own, i.e. a virtual private cloud (VPC).
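If you do open a UDP port, scope the security group rule to your own network rather than to the world. A sketch with boto3 (the group ID and CIDR are placeholders; UDP 1434 is just an example, it's what the SQL Server Browser service uses):

    # Allow inbound UDP on one port, but only from inside the VPC's CIDR.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",               # hypothetical security group
        IpPermissions=[{
            "IpProtocol": "udp",
            "FromPort": 1434,                         # e.g. SQL Server Browser
            "ToPort": 1434,
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # keep it VPC-internal
        }],
    )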
Server name e.g. \\MyServer
This kind of host name lookup does not require a name server, although you could certainly run one (preferably within a VPC). Simply put the server name and (internal) IP into your hosts file (%systemroot%\system32\drivers\etc\hosts) on each machine.
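For example, a small Python helper that appends such an entry (run it with Administrator rights; the IP and name are placeholders):

    # Append a name-to-IP mapping to the Windows hosts file.
    import os

    hosts_path = os.path.expandvars(r"%systemroot%\system32\drivers\etc\hosts")
    with open(hosts_path, "a") as hosts:
        hosts.write("\n10.0.0.12    MyServer\n")  # internal IP and server name are examples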
An actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
Folder sharing should work the same as with any other Windows machine, but even that should probably be kept within a VPC.
Setting up a VPC can involve a steep learning curve at first, but the documentation is good and the hard parts (such as VPN tunnels) are often not needed. Have a look at the example scenarios and follow the one that best matches your needs.
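For orientation, the simplest scenario (one VPC, one subnet, no VPN) reduces to two API calls; a boto3 sketch:

    # Create a minimal private network: one VPC with a single subnet.
    import boto3

    ec2 = boto3.client("ec2")
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.0.0/24")["Subnet"]
    print("Launch your instances into", subnet["SubnetId"])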

Related

How to use Ansible to update a client machine without having its IP address

Ansible can be used to update a machine via SSH, and in order to establish such a connection, you need an accessible IP address.
How to use it to update a fleet of distributed machines on different networks (consumers) which don't have a public address?
One solution I was thinking of is to reverse the procedure: configure Ansible on the client machine so that it connects to the server each day, reads a file to see whether a new update is available, and applies that update.
This doesn't seem straightforward; is there another way?
ansible-pull is exactly what you are looking for.
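ansible-pull inverts the push model: each client checks out a playbook repository and applies it to itself, so no inbound connection to the client is needed. A sketch of a wrapper you could run from a daily scheduled task (the repository URL is a placeholder):

    # Run ansible-pull against a Git repo holding this machine's playbook.
    import subprocess

    subprocess.run(
        ["ansible-pull",
         "-U", "https://example.com/ops/site-config.git",  # hypothetical repo
         "local.yml"],                                     # playbook applied locally
        check=True,
    )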

Best way to deploy multiple preconfigured VMs to AWS

I'm just looking for advice, I can do most of my own research, but I'm not sure where to start. Here's the situation:
I want to be able to deploy 3 VMs that have two NICs apiece. One NIC will have a standard IP that AWS provides. The second NIC will have a pre-configured internal static IP, say 192.168.0.100, .101, and .102. That way the VMs can talk to each other automatically without needing to know the external IPs. The purpose of this is so that I can have a small cluster already configured and won't have to do a lot of work every time I deploy it.
I want this to be repeatable. Let's say I want this for a classroom. Each student has the identical set of clustered VMs. All they need to do is power them on and start working.
So, I think I can do this with Terraform. I don't know if AWS has its own tooling that can do this as well; if it does, I haven't been able to find it yet.
Any suggestions would be greatly appreciated!
In general, every VM gets a private IP. If the VM is public you can also assign a public IP, which makes the VM accessible from outside and provides internet access; this is done via source/destination NAT.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
As long as the addresses are part of your VPC CIDR and available, you can specify them at instance launch. This can be done via the AWS Console, API, CLI, CloudFormation, and also with Terraform. The AWS-native tool for doing this at scale and repeatably is CloudFormation; a script that runs AWS CLI commands would also work.
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/opsworks/create-instance.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-network-interface-privateipspec.html
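Concretely, a launch with two NICs might look like the following boto3 sketch (all IDs are placeholders; Terraform's aws_instance and aws_network_interface resources express the same thing). Note that AWS won't auto-assign a public IP when an instance launches with more than one interface, so attach an Elastic IP afterwards if you need external access:

    # Launch one cluster node with two NICs; the second has a fixed private IP.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                       # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1, MaxCount=1,
        NetworkInterfaces=[
            {"DeviceIndex": 0, "SubnetId": "subnet-aaa111"},   # AWS assigns this IP
            {"DeviceIndex": 1, "SubnetId": "subnet-bbb222",
             "PrivateIpAddress": "192.168.0.100"},             # pre-configured static IP
        ],
    )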

How to limit access to Amazon EC2 to IP ranges

I have an Amazon EC2 instance that hosts different services (Cassandra DB, Elasticsearch, RabbitMQ, MySQL...) used by several developers at different locations. Since these developers have dynamic IP addresses, and this EC2 instance is used only for development, I left inbound access to the required ports open to 0.0.0.0/0. I'm aware that this is absolutely not recommended and that I should limit access, but I don't want to change the rules every day as someone's IP address changes.
However, I just got a report from Amazon that my instance is being used for a DoS attack, so I would like to fix this.
My question is whether it is possible to make a rule that will limit access to several ranges such as:
94.187.128.0 - 94.187.255.255
147.91.0.0 - 147.91.255.255
Definitely yes, because the ranges you listed aren't arbitrary ranges; they match CIDR blocks exactly (94.187.128.0/17 and 147.91.0.0/16).
A range that cannot be expressed as CIDR won't be accepted.
You can use IPcalc or a similar site to make this easier.
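You can also check it locally with Python's standard library:

    # Show how a start-end range maps onto CIDR blocks.
    import ipaddress

    for start, end in [("94.187.128.0", "94.187.255.255"),
                       ("147.91.0.0", "147.91.255.255")]:
        nets = ipaddress.summarize_address_range(
            ipaddress.ip_address(start), ipaddress.ip_address(end))
        print(start, "-", end, "->", [str(n) for n in nets])
    # prints 94.187.128.0/17 and 147.91.0.0/16: each range fits a single rule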
If it suits you, you can use a port range like 2000-3000 or, better, assign custom ports to the services; then the range can be as narrow as e.g. 2000-2001, and with port ranges you can fit one user into one rule.
An alternative, more secure but more difficult, approach: a web page the user connects to with a proper security key. If the key is recognized, a script on the server adds a rule to a security group using the client's IP. Another script, run by cron, deletes rules older than X hours. To dig deeper: on the Apache side, check two-way SSL authentication; on the AWS side, check the API and Command Overview.
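A hedged boto3 sketch of the server-side piece, the script that whitelists the caller's current IP (the group ID, port, and timestamp convention are all assumptions):

    # Add the authenticated client's IP to a security group; a cron job
    # would later revoke rules whose Description timestamp is too old.
    import boto3
    from datetime import datetime, timezone

    def allow_client(group_id: str, client_ip: str, port: int) -> None:
        boto3.client("ec2").authorize_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{
                "IpProtocol": "tcp",
                "FromPort": port, "ToPort": port,
                "IpRanges": [{
                    "CidrIp": f"{client_ip}/32",
                    # Timestamp recorded so the cleanup job knows when to expire it.
                    "Description": datetime.now(timezone.utc).isoformat(),
                }],
            }],
        )

    # e.g. allow_client("sg-0123456789abcdef0", "94.187.200.15", 3306)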

How to refer to other ec2 instances? Is Elastic IP the only feasible solution?

Initially my issue was "How do I RDP into an EC2 instance without having to first find its IP address?". To solve that I wrote a script that executes periodically on each instance. The script reads a particular tag value and updates the corresponding entry in Route 53 with the public DNS name of the instance.
This way I can always RDP into web-01.ec2.mydomain.com and be connected to the right instance.
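A minimal sketch of that kind of script (the zone ID and domain are placeholders; newer instances may need an IMDSv2 token for the metadata call):

    # Read this instance's Name tag and upsert a matching CNAME in Route 53
    # pointing at the instance's current public DNS name.
    import boto3
    import urllib.request

    METADATA = "http://169.254.169.254/latest/meta-data/instance-id"
    instance_id = urllib.request.urlopen(METADATA).read().decode()

    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    inst = reservations[0]["Instances"][0]
    name = next(t["Value"] for t in inst["Tags"] if t["Key"] == "Name")  # e.g. "web-01"

    boto3.client("route53").change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # hypothetical hosted zone for ec2.mydomain.com
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"{name}.ec2.mydomain.com",
                "Type": "CNAME",
                "TTL": 30,
                "ResourceRecords": [{"Value": inst["PublicDnsName"]}],
            },
        }]},
    )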
As I continued setting up my instances, I realized that to set up MongoDB replication I will need to somehow refer to three separate instances. I cannot use the internal private IP addresses, as they keep changing (or are prone to change on instance stop/start and when the DHCP lease expires).
Trying to access web-01.ec2.mydomain.com from within my EC2 instance returns the internal IP address of the instance, which seems to be standard behaviour. Thus, by using the Route 53 CNAMEs for my three instances, I can ensure that they can always discover each other. I wouldn't be paying any extra data transfer charges, as the CNAMEs will always resolve to internal IPs; I would, however, be paying for all those Route 53 queries.
I can run my script every 30 seconds, or even more often, to ensure that the DNS entries are as up to date as possible.
At this point, I realized that what I have in place is very much an Elastic IP alternative. Maybe not completely, but surely for all my use cases. So I am wondering whether to use Elastic IPs or not. There is no charge involved as long as my instances are running, and it does seem the easier option.
What do most people do? If someone with experience in this could reply, I would appreciate it.
Secondly, what happens in those few seconds or minutes during which an instance loses its current private IP and gets a new one? I'm assuming all existing connections get dropped. Does that affect the ELB health checks (a ping every 30 seconds)? I'm assuming that with an Elastic IP the DNS name would resolve to the correct IP immediately, as opposed to only after my script executes. Assuming my script runs every 30 seconds, will there be only 30 seconds of downtime, or can there be more? Will an Elastic IP always perform better than my scripted solution?
According to the official AWS documentation, a "private IP address is associated exclusively with the instance for its lifetime and is only returned to Amazon EC2 when the instance is stopped or terminated. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped." Given that, polling every 30 seconds for a change that can only happen across a stop/start seems inherently wrong. This leaves you with two obvious options:
Update the DNS once at/after boot time
Use an elastic IP and static DNS
Elastic IPs in use don't cost you anything, and even parked ones cost only a little. If your instances are mostly up, use an Elastic IP. If they are mostly down, go the boot-time update route. If your instance sits in a VPC, not even the boot-time update is strictly needed (but in a VPC you probably have different needs and a more complex network setup anyway).
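Associating an Elastic IP is a one-off operation; a boto3 sketch (the instance ID is a placeholder):

    # Allocate an Elastic IP and bind it to the instance. The address (and
    # any DNS record pointing at it) then survives stop/start cycles.
    import boto3

    ec2 = boto3.client("ec2")
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId="i-0123456789abcdef0",  # hypothetical instance
                          AllocationId=eip["AllocationId"])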
Another option you could consider is a software-defined datacenter solution such as Amazon VPC or Ravello Systems (disclaimer: our company).
Using such a solution allows you to create a walled-off private environment in the public cloud. Inside the environment you have full control, including your own private L2 network on which you manage IP addressing and can use e.g. statically allocated IPs. Communication with the outside (e.g. your app servers) happens via the IPs and ports that you configure.

How do I connect up my Amazon EC2 instances without manually modifying config files?

I have a three-tier Windows-based web application bundled into 3 AMIs on Amazon EC2 that I use for load testing.
An ASP.NET web application on IIS
A .NET application server
SQL Server
After I launch them, the config file of each tier needs modifying to update the IP addresses.
At the moment I am doing this manually: I connect to the web server instance via Remote Desktop and modify the config file to point to the new IP of the application server instance. Then I do the same on the application server to change the IP in the connection string.
This must be a common requirement and I must be missing something obvious. There must be a better way!
I could use Elastic IP addresses, but these machines are only provisioned for a couple of hours at a time, and I would be charged for the addresses when they were NOT in use (which would be most of the time).
Is there some way of persistently naming the machines? Can I somehow get all the machines on the same network and use machine names instead of IP addresses?
I could write some nifty PowerShell script that would perform the modifications remotely. Is there an example somewhere?
I could use a dynamic IP address service. I'm not sure if this would have any negative effect on performance or availability... Are there any downsides to this approach?
I could install some sort of self-configuring service on each machine (which connects to S3? SNS? SimpleDB?) to publish/retrieve the addresses of the other machines and update the config files automatically. Is there an example somewhere?
What is best practice?
You could use Amazon Virtual Private Cloud (Amazon VPC). You get a private subnet where you can assign an IP address to an instance, though it may require launching the instance from the command line (or the API) to assign the IP. VPC is charged the same way as EC2.
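The API equivalent of that command-line launch, as a hedged boto3 sketch (AMI, subnet, and address are placeholders): launch each tier into the VPC subnet with a fixed private IP, and the config files never need editing again.

    # Launch an instance into a VPC subnet with a known, fixed private IP
    # that the other tiers' config files can reference permanently.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",       # hypothetical AMI (e.g. the SQL Server tier)
        InstanceType="t3.micro",
        MinCount=1, MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        PrivateIpAddress="10.0.0.50",          # fixed address inside the subnet's CIDR
    )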
