How many TCP connections can a server handle from a load balancer?

As I understand it, the maximum number of TCP connections to a server from a single client IP address is ~64k, because the client's ephemeral port range bounds the number of distinct source ports it can use.
However, what I am not clear about is the maximum number of connections a server can handle behind a single load balancer, given that client connections terminate on the load balancer. Is it only ~64k because the server receives all requests from a single IP?

Indeed, an upstream server can handle only ~64k connections from the same client IP due to the limited ephemeral port range on the client side.
But you can assign several IP addresses to the same private interface of your load balancer and make it use them in a round-robin fashion.
You can define several networks on the same interface of the load balancer, for example:
192.168.1.1,
192.168.2.1,
192.168.3.1
And define corresponding extra IP addresses at upstream server:
192.168.1.2,
192.168.2.2,
192.168.3.2.
With the following nginx-style upstream configuration, the load balancer will pass requests to the same upstream server while using different destination IP addresses:
upstream ipproxy {
    server 192.168.1.2:some-port;
    server 192.168.2.2:some-port;
    server 192.168.3.2:some-port;
}
The load balancer is thus forced to use different destination IP addresses; since each (source IP, destination IP) pair has its own ~64k ephemeral-port space, three addresses let you bypass the 64k connection limitation and reach roughly 192k connections.
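The same arithmetic can be checked from the client side: a TCP connection is identified by (source IP, source port, destination IP, destination port), so a client gets a fresh ~64k port space for every distinct destination address. Below is a minimal Python sketch illustrating this, assuming a TCP service is listening on port 8080 at each of the example addresses above (both the addresses and the port are placeholders standing in for some-port):

import socket

# Each destination address yields a distinct (src-ip, src-port, dst-ip,
# dst-port) tuple space, so the same client can hold ~64k connections
# *per* destination address.
connections = []
for dst in ("192.168.1.2", "192.168.2.2", "192.168.3.2"):
    s = socket.create_connection((dst, 8080))  # 8080 stands in for some-port
    connections.append(s)
    print(s.getsockname(), "->", s.getpeername())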

Related

Hosting Redis on EC2 - ConnectionTimeoutError

I have an EC2 instance behind a load balancer. The security group attached to it allows inbound connections (both IPv4 and IPv6) on port 6379. I am able to connect to my Redis server with redis-cli:
redis-cli -h ec2-**-**-**-*.us-west-1.compute.amazonaws.com -p 6379
However, when I try to connect with Node.js and express-session, I get a ConnectionTimeoutError on EC2, although locally it works fine:
const redisClient = createClient() // uses default port localhost:6379
redisClient.connect().catch(console.error)
If there is a race condition here, like others mentioned, why does this race condition happen on EC2 and not locally? Is the default localhost incorrect since there is a load balancer in front of the instance?
Based on your comments, I'd say the problem is the load balancer. Redis communicates over its own TCP-based protocol. An ALB is only for HTTP/HTTPS traffic, so it cannot handle this protocol. Use a Network Load Balancer instead, with a TCP listener. Also make sure your security group rule allows TCP traffic on port 6379.
In a setup like this one, the Redis client should be instantiated explicitly (covering both IPv4 and IPv6 inbound traffic):
createClient({ socket: { host: '127.0.0.1', port: 6379 }, legacyMode: true })
Since Redis is self-hosted on EC2 with a load balancer in front of the instance, localhost may not resolve to 127.0.0.1 as the loopback address. This means that the default createClient(), with no host or port specified, might try to establish a connection to a different address.
(Make sure to allow inbound TCP traffic on port 6379, or whatever port you are using.)
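If you want to verify what localhost actually resolves to on the instance, a quick check is possible from any language; here is a small Python sketch (Python chosen only for brevity, independent of the Node client):

import socket

# Print every address "localhost" resolves to for port 6379. If it maps
# only to ::1 (IPv6) or to something unexpected, a client defaulting to
# "localhost" may never reach a Redis server bound to 127.0.0.1.
for family, _, _, _, sockaddr in socket.getaddrinfo("localhost", 6379):
    print(family, sockaddr)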

OKHttp3: how to retry another IP address if one is unreachable

Does OkHttp3 support the following case:
x.x.x.x myapp.com
y.y.y.y myapp.com
We have two IPs for one hostname, but it looks like OkHttpClient always retries the first IP address instead of trying the other available one.
Does retryOnConnectionFailure(true) cover this? From the docs, it should be supported by default:
Configure this client to retry or not when a connectivity problem is encountered. By default, this client silently recovers from the following problems:
Unreachable IP addresses. If the URL’s host has multiple IP addresses, failure to reach any individual IP address doesn’t fail the overall request. This can increase availability of multi-homed services.
Stale pooled connections. The ConnectionPool reuses sockets to decrease request latency, but these connections will occasionally time out.
Unreachable proxy servers. A ProxySelector can be used to attempt multiple proxy servers in sequence, eventually falling back to a direct connection.
Set this to false to avoid retrying requests when doing so is destructive. In this case the calling application should do its own recovery of connectivity failures.
OkHttp will try both in sequence.
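Conceptually, the recovery behavior quoted above boils down to "resolve all addresses, then try each until one connects". This hedged Python sketch illustrates the technique (it is not OkHttp's actual implementation, just the same idea):

import socket

def connect_any(host, port, timeout=5):
    # Resolve every IP address for the host, then attempt each in
    # sequence; only fail the request if all addresses are unreachable.
    last_err = None
    for *_, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # this address failed; fall through to the next
    raise last_err

conn = connect_any("myapp.com", 443)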

Requiring public IP address for kafka running on EC2

We have Kafka and ZooKeeper installed on a single AWS EC2 instance. Our Kafka producers and consumers run on separate EC2 instances which are in the same VPC and have the same security group as the Kafka instance. In the producer and consumer configs we use the internal IP address of the Kafka server to connect to it.
But we have noticed that we need to set advertised.listeners to the public IP address of the EC2 server for the producers and consumers to connect to the Kafka server:
advertised.listeners=PLAINTEXT://PUBLIC_IP:9092
We also have to whitelist the public IP addresses and open traffic on port 9092 for each of our EC2 servers running producers and consumers.
We want the traffic to flow over internal IP addresses. Is there a way to avoid whitelisting the public IP addresses and opening port 9092 for every one of our producer and consumer servers?
If you don't want to open access to all for either one of your servers, I would recommend putting a proper high-performance web server like nginx or Apache HTTPD in front of your application servers as a reverse proxy. This way you could also add SSL encryption, and your server stays on a private network while only the web server is exposed. It’s easy to set up and there are many tutorials, like this one: http://webapp.org.ua/sysadmin/setting-up-nginx-ssl-reverse-proxy-for-tomcat/
Because of the variable nature of the ecosystems Kafka may need to work in, it only makes sense that you are explicit in declaring the locations Kafka can use. The only way to guarantee that external parts of any system can reach it via an IP address is to ensure that you are advertising external IP addresses.
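For completeness: if all producers and consumers really do live in the same VPC, it may be enough to advertise the instance's private IP instead of the public one, so traffic never leaves the internal network. A hypothetical server.properties sketch (PRIVATE_IP is a placeholder, like PUBLIC_IP above):

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://PRIVATE_IP:9092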

Multiple clients - one server - one port? [duplicate]

I understand the basics of how ports work. However, what I don't get is how multiple clients can simultaneously connect to say port 80. I know each client has a unique (for their machine) port. Does the server reply back from an available port to the client, and simply state the reply came from 80? How does this work?
First off, a "port" is just a number. All a "connection to a port" really represents is a packet which has that number specified in its "destination port" header field.
Now, there are two answers to your question, one for stateful protocols and one for stateless protocols.
For a stateless protocol (i.e., UDP), there is no problem because "connections" don't exist: multiple people can send packets to the same port, and their packets will arrive in whatever sequence. Nobody is ever in the "connected" state.
For a stateful protocol (like TCP), a connection is identified by a 4-tuple consisting of source and destination ports and source and destination IP addresses. So, if two different machines connect to the same port on a third machine, there are two distinct connections because the source IPs differ. If the same machine (or two behind NAT or otherwise sharing the same IP address) connects twice to a single remote end, the connections are differentiated by source port (which is generally a random high-numbered port).
Simply, if I connect to the same web server twice from my client, the two connections will have different source ports from my perspective and destination ports from the web server's. So there is no ambiguity, even though both connections have the same source and destination IP addresses.
Ports are a way to multiplex IP addresses so that different applications can listen on the same IP address/protocol pair. Unless an application defines its own higher-level protocol, there is no way to multiplex a port. If two connections using the same protocol simultaneously have identical source and destination IPs and identical source and destination ports, they must be the same connection.
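You can observe this directly: open two connections from one client to the same server and port, and compare the local endpoints. A minimal Python sketch (example.com:80 is just a reachable placeholder):

import socket

# Two connections to the same destination IP and port; the OS assigns a
# different ephemeral source port to each, so the 4-tuples differ.
s1 = socket.create_connection(("example.com", 80))
s2 = socket.create_connection(("example.com", 80))
print(s1.getsockname())  # e.g. ('192.168.1.5', 52814)
print(s2.getsockname())  # e.g. ('192.168.1.5', 52815)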
Important:
I'm sorry to say that the response from "Borealid" is imprecise and somewhat incorrect: firstly, there is no relation to statefulness or statelessness in answering this question, and most importantly, the definition of the tuple for a socket is incorrect.
First, remember these two rules:
Primary key of a socket: A socket is identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL} not by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT} - Protocol is an important part of a socket's definition.
OS Process & Socket mapping: A process can be associated with (can open/can listen to) multiple sockets which might be obvious to many readers.
Example 1: Two clients connecting to the same server port means: socket1 {SRC-A, 100, DEST-X, 80, TCP} and socket2 {SRC-B, 100, DEST-X, 80, TCP}. This means host A connects to server X's port 80, and another host B also connects to the same server X on the same port 80. How the server handles these two sockets depends on whether the server is single-threaded or multi-threaded (I'll explain this later). What is important is that one server can listen to multiple sockets simultaneously.
To answer the original question of the post:
Irrespective of stateful or stateless protocols, two clients can connect to the same server port because each client gets a different socket (the client IPs will definitely differ). The same client can also have two sockets connected to the same server port, since those sockets differ in SRC-PORT. In all fairness, "Borealid" essentially gave the same correct answer, but the reference to stateless/stateful was unnecessary and confusing.
To answer the second part of the question, how a server knows which socket to answer: first understand that for a single server process listening on one port, there can be more than one socket (maybe from the same client or from different clients). As long as the server knows which request is associated with which socket, it can always respond to the appropriate client over that same socket. Thus a server never needs to open another port on its own node beyond the one the client initially connected to. If a server allocated a different port after a socket was bound, in my opinion it would be wasting resources and would require the client to connect again to the newly assigned port.
A bit more for completeness:
Example 2: It's a very interesting question: can two different processes on a server listen on the same port? If you do not consider protocol as one of the parameters defining a socket, then the answer is no, because in that case a client trying to connect to the server port would have no mechanism to indicate which of the two listening processes it intends to reach. This is the same theme asserted by rule (2). However, that is the WRONG answer, because 'protocol' is also part of the socket definition. Two processes on the same node can listen on the same port if they use different protocols. For example, two unrelated clients (say one using TCP and another using UDP) can connect and communicate with the same server node on the same port, served by two different server processes.
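This is easy to demonstrate: the Python sketch below binds a TCP socket and a UDP socket to the same address and port (8080 is an arbitrary choice), which succeeds because the protocol differs; two separate processes would show the same effect:

import socket

# TCP and UDP live in separate namespaces: both sockets can be bound to
# the same address/port at the same time, because protocol is part of
# the socket's identity.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tcp.bind(("0.0.0.0", 8080))
udp.bind(("0.0.0.0", 8080))
print("both bound to port 8080")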
Server Types - single & multiple:
When a server process listens on a port, multiple sockets can simultaneously connect to and communicate with that same server process. If the server uses only a single child process to serve all the sockets, the server is called single-process/threaded; if the server uses many sub-processes, serving each socket with one sub-process, it is called a multi-process/threaded server. Note that irrespective of the server's type, a server can and should always use the same initial socket to respond (no need to allocate another server port).
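A minimal multi-threaded server sketch in Python (port 8080 is arbitrary): the listening socket stays put, and every accept() hands a fresh per-client socket to a worker thread, which is also the socket used to respond:

import socket
import threading

def handle(conn, addr):
    # Respond over the same per-client socket returned by accept();
    # no new server port is ever allocated.
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))
srv.listen()
while True:
    conn, addr = srv.accept()  # new socket per client, same listening port
    threading.Thread(target=handle, args=(conn, addr)).start()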
A Note on Parent/Child Process (in response to query/comment of 'Ioan Alexandru Cucu')
Wherever I mentioned any concept involving two processes, say A and B, assume they are not related by a parent-child relationship. OSes (especially UNIX) by design allow a child process to inherit all file descriptors (FDs) from its parent. Thus all the sockets (which on UNIX-like OSes are FDs too) that process A is listening to can also be listened to by processes A1, A2, ..., as long as they are related to A by the parent-child relation. But an independent process B (i.e., one having no parent-child relation to A) cannot listen to the same socket. Note also that this rule of disallowing two independent processes from listening to the same socket lies with the OS (or its network libraries), and it is obeyed by most OSes; however, one could create an OS that violates this restriction.
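A sketch of the inheritance case on a UNIX-like OS (hypothetical port 8080; this blocks until clients connect, it is only meant to show the shared descriptor):

import os
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))
srv.listen()

# After fork(), parent and child share the listening socket's file
# descriptor, so both can accept() on the same port. An unrelated
# process attempting the same bind() would instead get EADDRINUSE.
if os.fork() == 0:
    conn, addr = srv.accept()  # child serves one client
else:
    conn, addr = srv.accept()  # parent serves another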
TCP / HTTP Listening On Ports: How Can Many Users Share the Same Port
So, what happens when a server listens for incoming connections on a TCP port? For example, let's say you have a web server on port 80. Let's assume that your computer has the public IP address 24.14.181.229 and the person that tries to connect to you has IP address 10.1.2.3. This person can connect to you by opening a TCP socket to 24.14.181.229:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer | Remote Computer
--------------------------------
<local_ip>:80 | <foreign_ip>:80
^^ not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive, because from the standpoint of the client, he has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, his port must be 80 too? This is a sensible thing to think, but actually not what happens. If it were correct, we could only serve one user per foreign IP address: once a remote computer connected, it would hog the port-80-to-port-80 connection and no one else could connect.
Three things must be understood:
1.) On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2.) Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.) When a client connects to a server, it picks a random, unused high-order source port. This way, a single client can have up to ~64k connections to the server for the same destination port.
So, this is really what gets created when a client connects to a server:
Local Computer | Remote Computer | Role
-----------------------------------------------------------
0.0.0.0:80 | <none> | LISTENING
24.14.181.229:80 | 10.1.2.3:<random_port> | ESTABLISHED
Looking at What Actually Happens
First, let's use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let's start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
So now there is one process actively listening (State: LISTEN) on port 500. The local address is 0.0.0.0, which means "listening on all interfaces". An easy mistake is to listen on address 127.0.0.1, which will only accept connections from the current computer. This is not a connection; it just means that a process requested to bind() to that port and IP, and that process is responsible for handling all connections to it. This hints at the limitation that there can only be one process per computer listening on a given port (there are ways to get around that using multiplexing, but that is a much more complicated topic). If a web server is listening on port 80, it cannot share that port with other web servers.
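(One of those ways, on Linux, is the SO_REUSEPORT socket option: if every process sets it before bind(), several listening sockets can share one port and the kernel distributes incoming connections among them. A hedged, Linux-specific Python sketch:)

import socket

# Linux-specific: with SO_REUSEPORT set before bind(), multiple processes
# can each bind a listening socket to the same address/port. Ports below
# 1024, like 500 here, still require root privileges.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s.bind(("0.0.0.0", 500))
s.listen()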
So now, let's connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/dcore/tree/master/apps/quicknet) that opens a TCP socket, sends the payload ("Test payload." in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:54240 ESTABLISHED -
If you connect with another client and do netstat again, you will see the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:26813 ESTABLISHED -
... that is, the client used another random source port for the new connection. So there is never any confusion between the connections, even though the IP addresses are the same.
Normally, for every connecting client the server forks a child process that communicates with the client (for TCP). The parent server hands the child process an established socket through which all communication with that client flows.
When you send data to that socket from your child server, the TCP stack in the OS creates a packet going back to the client and sets the "from" port to 80.
Multiple clients can connect to the same port (say 80) on the server because on the server side, after creating a socket and binding it (setting the local IP and port), listen is called on the socket, which tells the OS to accept incoming connections.
When a client tries to connect to the server on port 80, the accept call is invoked on the listening socket. This creates a new socket for the connecting client, and new sockets are created in the same way for subsequent clients using the same port 80.
(bind, listen, and accept are the system calls involved.)
Reference: http://www.scs.stanford.edu/07wi-cs244b/refs/net2.pdf

AWS Elastic Load Balancer not responding from Internet connection

I have created one EC2 instance (as part of provisioning a Tomcat Beanstalk environment). Now I need to configure an HTTPS connection to the EC2 instance. As per the Beanstalk documentation, the easiest way is to configure a load balancer that talks to browsers over HTTPS and routes traffic to the EC2 instance over HTTP.
So I configured a load balancer under the EC2 management console. After the configuration, I tried to ping the public DNS name of the load balancer and the resolved IP address. The target resolves but does not produce any response, as shown below:
ping 13.54.72.179
PING 13.54.72.179 (13.54.72.179) 56(84) bytes of data.
^C
--- 13.54.72.179 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6139ms
I carefully checked all the configurations against the load balancer configuration and troubleshooting documentation. Everything seems to be configured properly:
Target group: the target group shows a healthy state in the monitoring tab.
VPC: the load balancer's availability zone and the EC2 instance are in the same VPC. Also, the route table has an internet gateway associated with the 0.0.0.0/0 destination.
Load balancer listeners: both HTTP and HTTPS listeners are configured, and the load balancer is internet-facing.
Security group for the load balancer: inbound rules allow HTTP/HTTPS (TCP) from all sources; outbound rules allow all protocols to all destinations.
Security group for EC2: for testing purposes, we allow all traffic from all sources inbound.
I researched a few forum threads about the "load balancer not responding" topic and checked the configurations they mentioned. However, none of them worked for me.
So I am at a loss now. Can someone enlighten me as to what I might have missed in configuring the load balancer, or what I should do to troubleshoot?
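One observation, offered as a likely explanation rather than a confirmed fix: ping uses ICMP, and the security group rules described above only allow TCP (HTTP/HTTPS), so 100% packet loss on ping does not by itself mean the load balancer is broken. A more meaningful reachability test is an actual HTTP request against the listener, for example with this Python sketch (the DNS name is a placeholder for your load balancer's public DNS):

import urllib.request

# Replace with your load balancer's public DNS name.
url = "http://my-load-balancer-1234567890.us-east-1.elb.amazonaws.com/"
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.status, resp.reason)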
