UDP packet loss in AWS EC2

I have an application hosted on AWS that listens on a particular port. It receives UDP packets from a number of other AWS servers on that port. The problem is that I only ever receive packets from whichever server happens to send first. For example, if AWS1, AWS2, and AWS3 are three servers sending UDP packets to the server hosting my application, and packets from AWS2 arrive first, then I receive packets only from AWS2 on the listening port and nothing from the other two servers. If AWS3's packets arrive first, I receive packets only from AWS3. I have configured the inbound security group rule as 0.0.0.0/0, i.e. all traffic for all protocols, but the problem still persists.
Can anybody help me, please?
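A first step in diagnosing this is to confirm what actually reaches the application, independent of the security group. Below is a minimal sketch of a UDP listener that logs every sender; it assumes Node.js, and the port number is a placeholder.
const dgram = require('dgram');

const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
    // Log the source of every datagram so you can see whether packets
    // from all three AWS servers are reaching the instance at all.
    console.log(`received ${msg.length} bytes from ${rinfo.address}:${rinfo.port}`);
});

socket.on('listening', () => {
    const addr = socket.address();
    console.log(`listening on ${addr.address}:${addr.port}`);
});

socket.bind(5000); // hypothetical port; replace with the port your application uses
If the other senders never show up in this log either, the drop is happening before the application (for example in network ACLs or on the sending side) rather than in your receive loop.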

Related

Firewall blocks connection to second WebSocket server

In short, we have two separate servers for our web app. The first is the main server, which uses WebSockets for handling "chat rooms"; the second server only handles WebRTC audio chat rooms via WebSocket. Both servers use Express to create an HTTPS server, use secure WebSockets (wss), and listen on port 443.
I recently encountered a problem where a corporate client's firewall blocked the wss connection to only the WebRTC server. The error logged in the user's browser was "ERR_CONNECTION_TIMED_OUT", which means the user never connects via WebSocket. This has not happened with any other clients.
The WebSocket connection works normally between the user and the main server, and no rules have been added to their firewall for our app.
Has anyone encountered something similar? What kind of firewall setting might cause this? Could this be a CORS problem, since the servers are on their own subdomains?
The main server could be restricting the type of data sent on port 443, which uses SSL/TLS to secure the transmitted data.
Refer to this page for information on the well-known port numbers.
The WebRTC audio data may need to be transmitted on its own dedicated port number, configured on the main server for this purpose.
The problem was that the main server's WebSocket used TCP while the WebRTC server used UDP, and UDP was blocked by the corporate firewall by default.
WebRTC should use TCP as a backup, but I'm assuming UDP is still needed for the handshake.
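For reference, one common way to cope with firewalls that block UDP is to offer a TURN relay reachable over TCP (typically on port 443) so the browser can fall back to it. A minimal browser-side sketch, where the TURN URL and credentials are made-up placeholders:
// Hypothetical TURN server reachable over TCP on port 443; values are placeholders.
const pc = new RTCPeerConnection({
    iceServers: [
        {
            urls: 'turn:turn.example.com:443?transport=tcp',
            username: 'demo-user',
            credential: 'demo-pass'
        }
    ]
    // iceTransportPolicy: 'relay' could be set to force all media through the relay.
});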

Why can browsers receive incoming connections when other software can't?

My question is simple.
When you send data over TCP/IP with, for example, Firefox, you receive the reply on some random port that the browser listens on. But when I try to use a port for another task, like CS gaming or anything else, it doesn't work unless I use some kind of VPN. Why?
PS: there are no firewalls blocking the connection, and port forwarding on my router didn't work either.
Browsers are client apps that make outbound connections to web servers. When connecting to a server through a router’s NAT, the NAT takes note of the source and destination IP/port pairs so messages sent back from the server on the same connection are automatically routed to the correct client IP/port.
Browsers also support the WebSocket protocol. This can make it seem like the browser is listening on a specific port. In reality, though, the browser initiates a new outbound connection to the server, and that connection remains open throughout the WebSocket communication.
What matters is which peer is behind the NAT — the server or the client. For an outbound connection from a client, it can usually use any random port that is available at the time. For an inbound connection to a server, the server's IP/port must be known ahead of time and be routable. If the server is behind a NAT, the router(s) must be configured to make the server reachable from the other side of the NAT.
The server software can make a UPnP request to ask a router to forward inbound packets to the correct IP/Port. The router, depending on its configuration, may or may not honor such a request. If not, the router has to be configured manually by a network administrator.
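To make the asymmetry concrete, here is a minimal sketch (Node.js assumed, port number is a placeholder) of the two roles: the server must bind a known, reachable port in advance, while the client's local port is whatever ephemeral port the OS picks for the outbound connection.
const net = require('net');

// Server role: binds a fixed port ahead of time. If it sits behind a NAT,
// the router must forward this port for the server to be reachable from outside.
const server = net.createServer((conn) => {
    // conn.remotePort is the client's ephemeral source port, chosen by its OS/NAT.
    console.log(`client connected from ${conn.remoteAddress}:${conn.remotePort}`);
    conn.end('hello\n');
});

server.listen(8080, () => {
    // Client role: the OS picks a random ephemeral local port for the outbound connection.
    const client = net.connect({ host: '127.0.0.1', port: 8080 }, () => {
        console.log(`client is using local ephemeral port ${client.localPort}`);
        client.end();
    });
});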

When two servers are connected to the same socket.io Redis adapter, do both of them get messages from the client at the same time?

I have two servers, server-a and server-b.
To use socket.io across both servers, they use the Redis adapter. The client can then connect to either server-a or server-b.
Now the question is: if the client is connected to server-a and emits a message, does server-b have a way to get the message?
The client code:
io.emit('sendMessage', myMessage)
The server-a code:
io.on('sendMessage', function () {
    console.log('Server A got the message')
})
The server-b code:
io.on('sendMessage', function () {
    console.log('Server B got the message')
})
The client is connected only to server-a. Both server-a and server-b use the same Redis adapter.
The question is: when the client emits a message, will server-b get it too? (server-b is only connected to the same Redis instance.)
What I want to do: I have several servers that should perform an action based on a client request. When the client requests something, all the servers need to start working. I thought of doing this with socket.io, keeping one connection between the client and one of the servers.
All the servers would then use socket.io to get the same message from the client.
If you are using the redis adapter properly with all your servers, then when you do something like:
io.emit('sendMessage',myMessage)
from any one of your servers, then that message will end up being sent to all the clients connected to all of your servers. What happens internally is that the message is sent to a Redis channel that all the servers are listening to. When each server gets the message, it broadcasts it to all of its connected clients; these last steps are handled transparently for you by the Redis adapter and Redis store.
So, io.emit() is used to send to all connected clients (which uses all the servers in order to carry out the broadcast). It is not used to broadcast the same message directly to all your servers so that they can each manually process that message.
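For context, a minimal sketch of what "using the redis adapter properly" might look like on each server, assuming the socket.io-redis package and placeholder host/port values:
const io = require('socket.io')(3000); // placeholder port
const redisAdapter = require('socket.io-redis');

// Every server process points its adapter at the same Redis instance.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
    socket.on('sendMessage', (msg) => {
        // This broadcast goes through Redis, so clients attached to *any*
        // server sharing the adapter receive it; the other servers relay it
        // to their clients but do not otherwise process it themselves.
        io.emit('sendMessage', msg);
    });
});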
To send to each of your servers, you could probably use your own custom Redis publish/subscribe channel messages, since each server is already connected to Redis and this is something that Redis is good at.
Or, you could designate one master socket.io server and have all the other servers connect to it with socket.io. Then any server could ask the central server to broadcast a message to all the other servers.
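A minimal sketch of the custom Redis pub/sub approach suggested above, assuming the node-redis v4 client; the channel name 'all-servers' is made up:
const { createClient } = require('redis');

async function main() {
    const subscriber = createClient();
    const publisher = createClient();
    await subscriber.connect();
    await publisher.connect();

    // Every server runs a subscriber on the shared channel.
    await subscriber.subscribe('all-servers', (message) => {
        console.log('this server received:', message);
    });

    // The server that got the client's socket.io message re-publishes it
    // so the other servers can act on it too.
    await publisher.publish('all-servers', JSON.stringify({ type: 'sendMessage', payload: 'hello' }));
}

main().catch(console.error);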

OpenJMS - Client port number

With regard to the problem Bart is having in NAT router blocking JMS messages:
I am trying to find the port number that clients receive OpenJMS messages on. After searching the web for ages, I can only find information about the server ports, nothing about the client. This is for a TCP connection.
If anyone can point me in the right direction I would be very grateful.
Thanks!
In general, the client port number will be different for each new connection. I could find no evidence that OpenJMS clients use specific port numbers when communicating with servers. Here are a few explanations.
Port Numbers
When a client process first contacts a server process, it may use a well-known port number to initiate communication. Well-known port numbers are assigned to particular services throughout the Internet by IANA, the Internet Assigned Numbers Authority. The well-known port numbers are in the range 0 through 1023.
Well-known ports are used only to establish communication between client and server processes. When this has been done, the server allocates an ephemeral port number for subsequent use. Ephemeral port numbers are unique port numbers which are assigned dynamically when processes start communicating. They are released when communication is complete.
TCP/IP Client (Ephemeral) Ports and Client/Server Application Port Use
In contrast, servers respond to clients; they do not initiate contact with them. Thus, the client doesn't need to use a reserved port number. In fact, this is really an understatement: a server shouldn't use a well-known or registered port number to send responses back to clients. The reason is that it is possible for a particular device to have both client and server software of the same protocol running on the same machine. If a server received an HTTP request on port 80 of its machine and sent the reply back to port 80 on the client machine, it would be sending the reply to the client machine's HTTP server process (if present) and not the client process that sent the initial request.
To know where to send the reply, the server must know the port number the client is using. This is supplied by the client as the Source Port in the request, and then used by the server as the destination port to send the reply. Client processes don't use well-known or registered ports. Instead, each client process is assigned a temporary port number for its use. This is commonly called an ephemeral port number.
Similar answer on another question: How to decide on port number between client and server communication on internet:
Also, a client can connect to many servers on the same port. When the clients connect, they will use a random port on their end.
Only the server needs to worry about using a free port, and the clients need to know what this port is, or else they will not be able to connect to your server.
Other possible help:
How to find number of ephemeral ports in use?
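If it helps to see this in practice, here is a minimal sketch (Node.js assumed, target host/port are placeholders) showing that each outbound TCP connection from a client gets its own ephemeral local port, which is the port the server's replies come back to:
const net = require('net');

// Open two connections to the same server and print the local port of each;
// the operating system assigns a different ephemeral port to each connection.
for (let i = 0; i < 2; i++) {
    const conn = net.connect({ host: 'example.com', port: 80 }, () => {
        console.log(`connection ${i} uses local ephemeral port ${conn.localPort}`);
        conn.end();
    });
    conn.on('error', (err) => console.error(err.message));
}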

Getting (non-HTTP) Client IP with load-balancer

Say I want to run something like the nyan cat telnet server (http://miku.acm.uiuc.edu/) and I need to handle 10,000 concurrent connections total. I have 10 servers in addition to a load balancer. Each server can handle 1,000 concurrent connections, and I want to put a load balancer in front of them to divide the traffic randomly across the 10 servers.
From what I've read, it's fairly simple for a load balancer to pass an HTTP request (along with the client IP) to the backend server, perhaps with FastCGI or with an X- header.
What would be the simplest way for the load balancer to pass the client IP to the backend server in this case with a simple TCP server? Would a hardware load balancer be needed, or are there ways to do this simply through software?
In other words, is there a uniform way to pass the client IP when load balancing non-HTTP traffic, the same way Google gets the client IP when it load-balances its Google Talk XMPP servers or its Gmail IMAP servers?
This isn't for anything in specific; I'm just curious about if and how it can be done. Thanks in advance!
The simplest way would be for the load balancer to make itself completely invisible and pass the connection on with the source and destination IP address unmolested. For this to work, the same IP address must be assigned (as a loopback address, not to a physical interface) to all 10 servers and that would be the IP address the clients connect to. Internet traffic to that IP address has to go to the load balancer. The load balancer must be the default gateway for the servers.
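Under that setup, the backend does not need any special mechanism to learn the client IP: because the packets arrive with the source address unmodified, the ordinary remote address of the accepted connection already is the client IP. A minimal sketch, assuming Node.js and placeholder service address and port values:
const net = require('net');

const server = net.createServer((conn) => {
    // With the load balancer passing packets through unmodified, this is
    // the real client IP, not the load balancer's address.
    console.log(`client connected from ${conn.remoteAddress}`);
    conn.end('nyan\n');
});

// Bind to the shared service IP that is assigned on the loopback interface
// of every backend; the address and port here are placeholders.
server.listen(2323, '203.0.113.10');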
