tcp_tw_recycle behind application level load balancer? - amazon-ec2

Given that our Linux servers never open direct connections to our clients, is it safe to use tcp_tw_recycle on them?
Those servers are behind an application-level load balancer, and all the connections I see on them are between internal 10.x.x.x addresses.
Thanks

We have such a load balancer provided by AWS (ELB), so I'll provide my advice based on that:
Why gamble? If your overhead/port consumption is coming from quick client connections, Amazon recommends enabling persistent connections on your ELB instead. (I asked them about this specifically and got that recommendation... our Amazon contact does not recommend enabling tcp_tw_recycle.)
That said, if it's another internal box your servers are struggling to establish rapid connections with (say, Apache/PHP chatting with MySQL on behalf of the client without persistent connections), you might be able to get away with it:
If ALL client connections will be via the ELB (please set your security group accordingly), then technically speaking you shouldn't encounter problems from the tcp_tw_recycle timestamp-jumping cases I'm aware of:
ELB is a termination point on behalf of the client (their NAT firewall won't factor in, and ELB is not NAT based)
The ELB box(es) will not reset themselves, acquire the same IP address, and still be assigned as your ELB (will be someone else's if it happens at all)
The ELB box(es) will not be replaced by another ELB machine using the same IP and still be serving your traffic as your ELB (will be someone else's if it happens at all)
(2 and 3 are not guaranteed by Amazon, but that does appear to be their behavior, just as stop/start will get you a new private IP for EC2 boxes. If it did happen, I'd imagine it's a thing of extremely low probability.)
You could theoretically run into issues restarting your own boxes if they communicate with other service machines (like MySQL or memcached) and you restart (not stop/start) one of them, or if you move their Elastic IP to another box and aren't using private IPs for internal chatter. But you have some control over this. However, if it's all on the AWS cloud (or your fast internal network), issues are extremely unlikely (unless your AWS zone is having a bad day and you're restarting/replacing your systems for that reason).
A buddy and I had a long-standing argument about this, and he won by proving his point with a long-running 4k-browser (fast script) load test via Neustar... there were no connection issues from the client side via ELB, and eliminating the overhead helped quite a bit :-)
If you haven't already, consider tcp_tw_reuse (we were using this to keep the ephemeral port range active before the above-mentioned test showed the additional merit of eliminating the overhead with tcp_tw_recycle for us). Be sure to watch your counters on ifconfig if you do decide to disable that chunk of the protocol ;-P.
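If you want to see where you stand before touching either knob, here is a minimal sketch (assuming a Linux box with /proc mounted; this is just an illustration, not an official tool) that tallies sockets by TCP state:

```python
# time_wait_count.py -- tally sockets by TCP state by reading /proc/net/tcp.
from collections import Counter

# Hex state codes from the kernel's include/net/tcp_states.h
TCP_STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
    "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
    "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
    "0A": "LISTEN", "0B": "CLOSING",
}

def count_states(path="/proc/net/tcp"):
    counts = Counter()
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            state_code = line.split()[3]  # 4th column is the socket state
            counts[TCP_STATES.get(state_code, state_code)] += 1
    return counts

if __name__ == "__main__":
    for state, n in count_states().most_common():
        print(f"{state:12} {n}")
```

A large pile of TIME_WAIT entries against your internal 10.x.x.x peers is exactly the symptom that makes tcp_tw_reuse/tcp_tw_recycle tempting in the first place.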
The following is also a good summary resource on the topic of timestamps jumping: Dropping of connections with tcp_tw_recycle

Related

For a given Heroku Dyno *instance*, is the external IP address stable for the lifetime of that instance?

Can I count on the external IP address being stable for any given Dyno instance?
That is, my Dyno boots up and makes a request to some external service. That service notes the incoming IP address. Could that service assume that any subsequent traffic from that same Dyno instance would come from the same IP address? Would the same apply if the same Dyno makes a request to a very different endpoint?
I understand that Heroku makes no guarantees about Dyno addressing unless you upgrade to the Private level offering (or otherwise spend more on addons or Enterprise features). I'm not looking to know in advance which IPs to expect, just whether it's stable.
I assume the architecture is fairly obvious: containers running on VMs that have outbound network access using the VM's interface, so external IPs for outbound connections are going to be the VM's IP address. However, Heroku emphasizes its routing layer and makes it sound complicated, so you never know whether they have some kind of outbound routing complexity as well, which is what I'm worried about.
On a broad level, you should never expect IP addresses to be stable on Heroku by default. This applies to DNS targets (hence the requirement for CNAMEs everywhere) and to outbound IPs.
Regarding the specific question: yes, a single specific Dyno instance will have the same outbound IP address, but that means it will only be stable for ~24 hours at most (possibly +3 1/2 hours, see /Dynos#restarting). After web.1's daily cycle, the newly launched web.1 will have a new public IP address. web.1, web.2, web.3, web.#…, along with the Dynos of any and all other process groups, will likely never have the same public IP address at the same time.
There are means for stabilizing outbound IPs longer term, as is accomplished by various Proxy partner add-ons, or any other Proxy service you choose to use.
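If you'd rather verify than assume, here's a minimal sketch a dyno could run at boot and periodically thereafter (api.ipify.org is an assumed third-party echo service, not a Heroku facility):

```python
# outbound_ip_probe.py -- observe a dyno's outbound IP over time.
import time
import urllib.request

def outbound_ip() -> str:
    # api.ipify.org simply echoes back the caller's public IP.
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    boot_ip = outbound_ip()
    print(f"IP at boot: {boot_ip}")
    while True:
        time.sleep(3600)  # check hourly
        ip = outbound_ip()
        if ip != boot_ip:
            print(f"outbound IP changed: {boot_ip} -> {ip}")  # the dyno was likely cycled
            boot_ip = ip
```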

Algorithm/Method to save costs of running filtering proxy on AWS EC2

I am trying to set up a Squid proxy combined with the DansGuardian content-filtering engine on EC2. I will be filtering traffic from mobile (iOS/Android) clients via this filtering proxy, but that could mean a lot of traffic flowing through my system, since I will have to route all of the traffic through the DNS, which in turn could mean a lot of Amazon EC2 cost! Is there a known method/standard by which I can direct only known blacklisted traffic via this proxy in a cost-effective manner? Things I have explored include creating blacklists on the device and filtering right there, but that would mean I have to keep going back and changing them (adding or removing sites), which is not really feasible anyway.
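To make the "only route blacklisted traffic via the proxy" idea concrete, here is one possible shape of the decision logic, sketched in Python (the blacklist URL and proxy address are hypothetical placeholders; on iOS/Android the equivalent logic would normally live in a PAC file or VPN profile):

```python
# selective_proxy.py -- sketch: send only blacklisted hosts through the filtering proxy.
import urllib.request

BLACKLIST_URL = "https://example.com/blacklist.txt"  # hypothetical: one domain per line
PROXY = "http://my-squid.example.com:3128"           # hypothetical EC2 Squid/DansGuardian box

def load_blacklist() -> set:
    # Fetching the list periodically avoids baking it into the device.
    with urllib.request.urlopen(BLACKLIST_URL) as resp:
        return {line.strip().lower() for line in resp.read().decode().splitlines() if line.strip()}

def route_for(host: str, blacklist: set) -> str:
    # Match the host and every parent domain (ads.example.org also matches example.org).
    parts = host.lower().split(".")
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in blacklist:
            return PROXY  # suspect traffic goes through the EC2 filter
    return "DIRECT"       # everything else bypasses EC2, and its bandwidth bill, entirely

if __name__ == "__main__":
    bl = load_blacklist()
    for host in ("www.example.com", "ads.example.org"):
        print(host, "->", route_for(host, bl))
```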

Do I *really* need RPC and NETBIOS to use transactional NServiceBus queues between local servers and Amazon EC2?

We have been trying - without success - to get transactional message queues working between local servers and our cloud servers up in Amazon EC2.
We're using NServiceBus, and have got the pub/sub examples and various other trivial apps working locally between here and EC2, but trying to spin up the components of our actual application is proving... vexatious.
As far as I can work out, to allow a local server (DYLAN-PC) to send a message transactionally via a queue on an Amazon EC2 instance, I will need to:
Enable NETBIOS name resolution (e.g. via the /etc/lmhosts file) at both ends
Allow RPC connections to be initiated from either end (so open port 135 for RPC plus various other ports)
Configure MSDTC on both systems, enabling remote connections and inbound/outbound connections
Have I missed something? In particular, the requirement to allow NetBIOS in an age where everything (including Active Directory!) runs on DNS seems particularly archaic. Are we doing something stupid trying to use MSMQ between sites like this? This is the first big project where we've tried this kind of distributed architecture, and the deployment/configuration is starting to hurt so much I'm convinced we've taken a wrong turn somewhere... a little perspective or advice would be gratefully received!
If you're looking to build a geographically distributed system where you can't arrange a VPN between the sites, you should be using the gateway capabilities of NServiceBus to communicate over alternate transports (like HTTP) between those sites.
RPC is required for reading from remote queues.
If you push to remote queues and pull from local queues, you won't be using RPC.

When would you need multiple servers to host one web application?

Is that called "clustering" of servers? When a web request is sent, does it go through the main server, and if the main server can't handle the extra load, then it forwards it to the secondary servers that can handle the load? Also, is one "server" that's up and running the application called an "instance"?
[...] Is that called "clustering" of servers?
Clustering is indeed transparently using multiple nodes that are seen as a unique entity: the cluster. Clustering allows you to scale: you can spread your load across all the nodes and, if you need more power, you can add more nodes (short version). Clustering also allows you to be fault tolerant: if one node (physical or logical) goes down, the other nodes can still process requests and your service remains available (short version).
When a web request is sent, does it go through the main server, and if the main server can't handle the extra load, then it forwards it to the secondary servers that can handle the load?
In general, this is the job of a dedicated component called a "load balancer" (hardware or software) that can use any of several algorithms to balance requests: round-robin, FIFO, LIFO, load-based...
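As a tiny illustration of the simplest of those algorithms, here is a round-robin sketch in Python (the backend addresses are placeholders):

```python
# round_robin.py -- minimal sketch of round-robin backend selection.
import itertools

BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]  # placeholder nodes
pool = itertools.cycle(BACKENDS)

def next_backend() -> str:
    """Hand each incoming request to the next node in turn."""
    return next(pool)

if __name__ == "__main__":
    for request_id in range(6):
        print(f"request {request_id} -> {next_backend()}")
```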
In the case of EC2, you previously had to load balance with round-robin DNS and/or HAProxy. See Introduction to Software Load Balancing with Amazon EC2. Amazon has since launched load balancing and auto-scaling (beta) as part of their EC2 offering. See Elastic Load Balancing.
Also, is one "server" that's up and running the application called an "instance"?
Actually, an instance can be many things (depending on who's speaking): a machine, a virtual machine, a server (software) up and running, etc.
In the case of EC2, you might want to read Amazon EC2 Instance Types.
Here is a real example:
This specific configuration is hosted at RackSpace in their Managed Colo group.
Requests pass through a Cisco firewall. They are then routed across a gigabit LAN to a Cisco CSS 11501 Content Services Switch (i.e., a load balancer). The load balancer matches the incoming content to a content rule, handles the SSL decryption if necessary, and then forwards the traffic to one of several back-end web servers.
Every 5 seconds, the load balancer requests a URL on each webserver. If a webserver fails (twice in a row, IIRC) to respond with the correct value, that server is not sent any traffic until the URL starts responding correctly again.
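A sketch of that probe logic in Python (the URL, expected body, and thresholds are stand-ins; a real CSS 11501 implements this internally):

```python
# health_probe.py -- sketch of the "fail twice in a row before pulling a node" rule.
import time
import urllib.request

CHECK_URL = "http://10.0.0.1:8080/health"  # placeholder backend health URL
EXPECTED = b"OK"                           # placeholder expected response body

def probe_once() -> bool:
    try:
        with urllib.request.urlopen(CHECK_URL, timeout=2) as resp:
            return resp.read().strip() == EXPECTED
    except OSError:
        return False

if __name__ == "__main__":
    failures = 0
    in_rotation = True
    while True:
        if probe_once():
            failures = 0
            if not in_rotation:
                in_rotation = True
                print("node restored to rotation")
        else:
            failures += 1
            if failures >= 2 and in_rotation:  # two misses in a row pulls the node
                in_rotation = False
                print("node pulled from rotation")
        time.sleep(5)  # probe every 5 seconds, as described above
```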
Further behind the webservers is a MySQL master/slave configuration. Connections may be made to the master (for transactions) or to the slaves for read-only requests.
Memcached is installed on each of the webservers, with 1 GB of RAM dedicated to caching. Each web application may utilize the cluster of memcached servers to cache all kinds of content.
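For the caching layer, here is a sketch using the python-memcached client library (the server addresses and the DB helper are placeholders) showing how an application can treat the per-webserver daemons as one pooled cache:

```python
# cache_example.py -- sketch of read-through caching against a memcached pool.
# Requires the python-memcached package (pip install python-memcached).
import memcache

# The daemons running on each webserver, pooled into one logical cache.
mc = memcache.Client(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])

def load_profile_from_db(user_id: int) -> dict:
    # Stand-in for a real query against the MySQL master/slave pair.
    return {"id": user_id, "name": "example"}

def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    profile = mc.get(key)  # try the cache first
    if profile is None:
        profile = load_profile_from_db(user_id)
        mc.set(key, profile, time=300)  # cache the result for 5 minutes
    return profile
```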
Deployment is handled using rsync to sync specific directories on a management server out to each webserver. Apache restarts, etc., are handled through similar scripting over SSH from the management server.
The amount of traffic that can be handled through this configuration is significant. The advantages of easy scaling and easy maintenance are great as well.
For clustering, any web request would be handled by a load balancer which, being kept updated on the current load of the servers forming the cluster, sends the request to the least burdened server. As for whether one running copy is called an "instance"... I believe so, but I'd wait for confirmation on that.
You'd need a very large application to be bothered with thinking about clustering and the "fun" that comes with it, software- and hardware-wise, though. Unless you're looking to start (or are already running) something big, it wouldn't be anything to worry about.
Yes, it can be required for clustering. Typically, as the load goes up, you might find yourself with a frontend server that does URL rewriting, HTTPS if required, and caching with Squid, say. The requests get passed on to multiple backend servers, probably using cookies to associate a session with a particular backend if necessary. You might have the database on a separate server as well.
I should add that there are other reasons why you might need multiple servers; for instance, there may be a requirement that the database not be on the frontend server for security reasons.

Detecting dead applications while server is alive in NLB

Windows NLB works great and removes a computer from the cluster when the computer is dead.
But what happens if the application dies but the server still works fine? How have you solved this issue?
Thanks
By not using NLB.
Hardware load balancers often have configurable "probe" functions to determine if a server is responding to requests. This can be by accessing the real application port/URL, or some specific "healthcheck" URL that returns only if the application is healthy.
Other options on these look at the queue depth or the time taken to respond to requests.
Cisco put it like this:
The Cisco CSM continually monitors server and application availability using a variety of probes, in-band health monitoring, return code checking, and the Dynamic Feedback Protocol (DFP). When a real server or gateway failure occurs, the Cisco CSM redirects traffic to a different location. Servers are added and removed without disrupting service—systems easily are scaled up or down.
(from here: http://www.cisco.com/en/US/products/hw/modules/ps2706/products_data_sheet09186a00800887f3.html#wp1002630)
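Going back to the "healthcheck URL" idea, here is a minimal server-side sketch using Python's standard library (the /health path and the checks behind it are up to you):

```python
# healthcheck.py -- sketch of an application healthcheck endpoint for a balancer probe.
from http.server import BaseHTTPRequestHandler, HTTPServer

def application_is_healthy() -> bool:
    # Put real checks here: database reachable, queues draining, disk not full...
    return True  # placeholder

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health" and application_is_healthy():
            body, status = b"OK", 200
        else:
            body, status = b"FAIL", 503  # the balancer treats anything but 200 as down
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8081), HealthHandler).serve_forever()
```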
Presumably with Windows NLB there is some way to programmatically set the weight of nodes? The nodes should self-monitor and, if there is some problem (e.g., a particular node is low on disk space), set their own weight to zero so they receive no further traffic.
However, this needs to be carefully engineered and have further human monitoring to ensure that you don't end up with a situation where one fault causes the entire cluster to announce itself down.
You can't really hope to deal with a "Byzantine generals" situation in network load balancing; an appropriately broken node may think it's fine and appear fine while being completely unable to do any actual work. The trick is to try to minimise the possibility of these situations happening in production.
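As a sketch of that self-monitoring idea (the disk-space threshold is a placeholder, and the wlbs.exe invocations are an assumption about the Windows NLB control utility; verify them locally before relying on this):

```python
# self_monitor.py -- sketch: a node drains itself out of the NLB cluster when unhealthy.
import shutil
import subprocess
import time

MIN_FREE_BYTES = 1 * 1024**3  # placeholder: require at least 1 GB free

def node_is_healthy() -> bool:
    return shutil.disk_usage("C:\\").free >= MIN_FREE_BYTES

if __name__ == "__main__":
    draining = False
    while True:
        healthy = node_is_healthy()
        if not healthy and not draining:
            subprocess.run(["wlbs", "drainstop"])  # finish current work, accept no new traffic
            draining = True
        elif healthy and draining:
            subprocess.run(["wlbs", "start"])      # rejoin the cluster
            draining = False
        time.sleep(30)
```

Per the caution above, a monitor like this should only ever drain its own node; pair it with external alerting so a shared fault can't quietly empty the whole cluster.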
There are multiple levels of health check for a network application.
is the server machine up?
is the application (service) running?
is the service accepting network connections?
does the service respond appropriately to a "are you ok" request?
does the service perform real work? (this will also check back-end systems behind the service you are probing)
My experience with NLB may be incomplete, but I'll describe what I know. NLB can do 1 and 2. With custom coding you can add the other levels with varying difficulty. With some network architectures this can be very difficult.
Most hardware load balancers from vendors like Cisco or F5 can be easily configured to do 3 or 4. Level 5 testing still requires custom coding.
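As a sketch of what levels 3 through 5 look like from the probing side (the host, port, and paths are placeholders; level 5 assumes the service exposes an endpoint that exercises its back end):

```python
# deep_probe.py -- sketch of health-check levels 3-5 for one service.
import socket
import urllib.request

HOST, PORT = "10.0.0.1", 8080  # placeholder service address

def level3_accepts_connections() -> bool:
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            return True
    except OSError:
        return False

def level4_responds_ok() -> bool:
    # "Are you ok?" -- a cheap request answered by the service itself.
    try:
        with urllib.request.urlopen(f"http://{HOST}:{PORT}/ping", timeout=2) as r:
            return r.status == 200
    except OSError:
        return False

def level5_does_real_work() -> bool:
    # A request that must touch the back-end systems (e.g. a real DB read).
    try:
        with urllib.request.urlopen(f"http://{HOST}:{PORT}/health/deep", timeout=5) as r:
            return r.status == 200
    except OSError:
        return False
```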
We start in the situation where all nodes are part of the cluster but inactive.
We run a custom service monitor which makes a request to the service locally via the external interface. If the response is successful, we start the node (allow it to start handling NLB traffic). If the response fails, we stop the node from receiving traffic.
All the intermediate steps described by Darron are irrelevant; whether it worked or not is the only thing we care about. If the machine is inaccessible, the rest of the NLB cluster will treat it as failed.
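A sketch of that monitor (the service URL is a placeholder, and the wlbs.exe commands are an assumption about the Windows NLB CLI):

```python
# nlb_service_monitor.py -- probe the local service via the node's external
# interface, then start or stop this NLB node accordingly.
import subprocess
import time
import urllib.request

SERVICE_URL = "http://203.0.113.10/"  # this node's external address, not localhost

def service_works() -> bool:
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=5) as resp:
            return resp.status == 200  # "did it work" is all we care about
    except OSError:
        return False

if __name__ == "__main__":
    active = False  # all nodes begin in the cluster but inactive
    while True:
        working = service_works()
        if working and not active:
            subprocess.run(["wlbs", "start"])  # begin handling NLB traffic
            active = True
        elif not working and active:
            subprocess.run(["wlbs", "stop"])   # stop receiving traffic
            active = False
        time.sleep(10)
```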
