Server receiving data after connection has been made - client

A server is listening, let's say, on port 3000. When it receives a connection request and the connection succeeds, calling a function like "getRemotePort" will return, say, 1234. My question is: the server will send data to the remote device (the client) on port 1234, but what about the other way around? Will the client keep sending data to the same port, in this case 3000? So will everything the server receives (connection requests and other data) come through the same port?

Yes, it will.
This is not a problem.
The point behind this is that a connection is defined by the (LocalIP, LocalPort, RemoteIP, RemotePort) tuple - this is the only combination that has to be unique.
On the performance side this is no problem either: a port is a logical construct that has no limiting effect on the throughput of a connection, some edge cases aside. (Very high latency combined with very high throughput can create a case where a single connection cannot saturate a physical link, so a second connection, requiring a second port, can speed things up. Mind, though, that even in this case it is not the port count but the connection count that is to blame - they just happen to be 1:1.)
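
To make this concrete, here is a minimal Go sketch (the port and the echo handler are just illustrations): the accepted connection keeps using the server's local port 3000 for traffic in both directions, while the client's ephemeral port (e.g. 1234) shows up as the remote port.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":3000")
        if err != nil {
            panic(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                // Local side is always :3000; the remote side is the client's
                // ephemeral port, which makes the 4-tuple unique per connection.
                fmt.Println("local:", c.LocalAddr(), "remote:", c.RemoteAddr())
                buf := make([]byte, 1024)
                n, err := c.Read(buf) // data from the client arrives on this same connection
                if err != nil {
                    return
                }
                c.Write(buf[:n]) // and replies go back over it as well
            }(conn)
        }
    }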


Something like the ZeroMQ REP/REQ model but without having to reply?

Currently I have a REP/REQ model up and running in my code.
However, I do not need either side to send replies, so the replies are just wasting time. I don't know whether that matters in the real world or not.
Basically it looks like this.
Client PCs - Connect - REQ
These all connect to the server and update it on a regular basis with info they have. They don't care if the server missed a particular message, nor do they need any info back from the server.
There are many of these clients, but not an excessive number - let's say between 10 and 100, all hitting the same server. Well, probably not: they will likely be in groups, with one group hitting one server and another group a different one. Clients send messages several times a second, but not much more than that. I have not really done any timing - I don't know how to time things on my computer at better than 1-2 ms resolution - so I don't know what to expect, what is feasible in terms of performance, or how many REQ clients can be served by one REP server.
Server PC - Bind - REP
This guy sits in a loop on his own separate thread, waiting for REQs to come in. He sends replies to the REQs because he has to, not because he really wants or needs to.
Alternate Models
From some googling it seems that PUSH/PULL is recommended if you just want to send messages and don't care about replies.
However, I couldn't figure out how to fit that into my architecture, because the binds and connects seemed to be reversed from what I need: I would like the bind to be on the server, because the client "connect" machines are not always reachable.
Solutions
1) A good alternate model
A good alternate model that works and is relatively simple would be great. I'm not sure there really is one, but apart from REP/REQ and PUB/SUB I don't know much about other models.
2) Am I worrying about nothing?
If replies from REP to REQ are always going to be really fast, and the reception of those replies by REQ is also really fast, then I guess I'm worrying about nothing. That would be good to know, so feel free to let me know if this is the case.
The Connection question
I don't really understand what connecting sockets does.
On a client REQ socket, should I connect at the start of each loop iteration, before sending that one single message? Or should I connect once before the loop, on the socket that I also created before the loop?
I also don't understand what this means in terms of reliability: do I have to check the connected status and reconnect myself, or is that done automatically?
To sum up
I have a "global" context.. created at the start, disposed of at the end
This daddy context has 1 or 2 sockets (connected to the same address, including port) - I'm still debugging this dual socket on the same address thing so I'm not sure if that is ok or it just doesn't work that way - clarification would be nice
These context(s) are lazy initialized and outside the loop scope, so we are not recreating sockets on a regular basis
connect calls for the sockets occur currently outside of the loop scope, but I'm not sure if it is not better to have them inside the loop scope.
I think I'm getting mixed up here.. I think the dual sockets are on my PUB/SUB model .. 1 PUB with 2 SUB sockets on each client, but anyhow please let me know if that would be a problem as well.
If you do not need Request-Reply, do not use it.
Request-Reply is generally slow because you need a round trip to the server for every message. This means you pay twice the network latency, which is the time a network packet needs to travel over the network. That does not matter while traffic is low, but it becomes a bottleneck when traffic is high, for example multiple messages per second.
As you already mentioned, Push-Pull is a valid alternative for one-way traffic. With Push-Pull you create a Pull socket on the server and bind it to an endpoint (this is similar to the Reply socket), and you create a Push socket on the clients and connect it to the server endpoint (this is similar to the Request socket).
If you send multiple messages from the same client to the same server, you should connect only once. Setting up a network connection is a costly operation, because it requires multiple network round trips, at least for TCP.
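
Here is a minimal sketch of that setup, assuming the pebbe/zmq4 Go binding (the endpoint and message contents are placeholders; any ZeroMQ binding follows the same bind/connect and send pattern). The server binds a PULL socket once; each client connects a PUSH socket once, outside its send loop, and then just sends:

    package main

    import (
        "fmt"

        zmq "github.com/pebbe/zmq4"
    )

    func client() {
        // Client side: connect once, outside the send loop.
        push, err := zmq.NewSocket(zmq.PUSH)
        if err != nil {
            panic(err)
        }
        defer push.Close()
        push.Connect("tcp://localhost:5555") // hypothetical server endpoint
        for i := 0; i < 10; i++ {
            push.Send(fmt.Sprintf("update %d", i), 0) // fire-and-forget, no reply
        }
    }

    func main() {
        // Server side: bind once, then receive in a loop. No replies are sent.
        pull, err := zmq.NewSocket(zmq.PULL)
        if err != nil {
            panic(err)
        }
        defer pull.Close()
        pull.Bind("tcp://*:5555")

        go client() // stand-in for a remote client PC

        for i := 0; i < 10; i++ {
            msg, err := pull.Recv(0)
            if err != nil {
                break
            }
            fmt.Println("got:", msg)
        }
    }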

How to create two UDP sockets where one is sending requests and another one receiving the answers?

I'm looking for a proper way to have one goroutine send request packets to specific servers while a second goroutine receives the responses and handles them, maybe even spawning a new goroutine to handle each response.
The architecture of the game is that there are multiple masterservers, which can be asked for IP lists of registered servers.
After getting the IPs and ports from the masterservers, each of the IPs gets a request for its data, like server name, map, players, etc.
Also, are there better ways to handle this?
Currently I am creating a goroutine per request that also waits for a response afterwards.
The wait for a response times out after 35 ms, after which 1.2 times the previous number of request packets is sent, to create a small burst of requests. The timeout is also doubled on every retry.
I'd like to know if there are better strategies that have proven to be more robust and have a lower latency, that are not too complex.
Edit:
I only create the client-side sockets, but if there is no better approach I would have a client that sends UDP request packets carrying a different socket's address as the sender value, so that the answers arrive on a different socket that acts kind of like a server, where all the response packets are collected - in order to separate the sending socket from the receiving socket.
This question is tagged client-server because one of the sockets is supposed to act like a server, even though all it does is receive the expected answers to the request packets sent by the client socket.
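
For what it's worth, a single UDP socket already supports this split: in Go, one goroutine can send with WriteToUDP while another receives with ReadFromUDP on the same socket, and every reply comes back to the one local port the requests were sent from. A minimal sketch (the addresses, request payload, and handler are assumptions, and the retry/backoff logic described above is omitted):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func handleResponse(addr *net.UDPAddr, data []byte) {
        fmt.Printf("%v: %q\n", addr, data)
    }

    func main() {
        // One socket with an OS-chosen local port, shared by sender and receiver.
        conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 0})
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        servers := []string{"192.0.2.1:8303", "192.0.2.2:8303"} // hypothetical game servers

        // Receiver goroutine: collects all responses, one handler goroutine each.
        go func() {
            buf := make([]byte, 2048)
            for {
                n, addr, err := conn.ReadFromUDP(buf)
                if err != nil {
                    return
                }
                pkt := make([]byte, n)
                copy(pkt, buf[:n]) // copy, since buf is reused on the next read
                go handleResponse(addr, pkt)
            }
        }()

        // Sender: fire one request per server.
        for _, s := range servers {
            addr, err := net.ResolveUDPAddr("udp", s)
            if err != nil {
                continue
            }
            conn.WriteToUDP([]byte("getinfo"), addr) // hypothetical request packet
        }

        time.Sleep(100 * time.Millisecond) // crude wait for responses, for this sketch only
    }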

Data structure used for a buffer

I attended a developer interview recently and I was asked the following question:
I have a server that can handle 20 requests. Which data structure is used to model this? What will happen if there are more than 20 requests, i.e., what will you do in case of buffer overflow?
I am not from a CS background; I am transitioning from a different field and am self-taught in programming and DSA. So I would like to know the answers to these questions. Thanks in advance!
Regarding a server that can handle 20 simultaneous requests:
Your question indicates that you are not yet thinking about this in a reasonable way and are probably quite far from understanding how it works. No problem - it just means that maybe you have more to learn than you expect.
To help you along, I will write out the correct answer, full of terms you can google for:
When a client attempts to connect to your server, the kernel puts its request into a 'listen queue' attached to your server's listening 'socket'.
When your server is ready to service a request, it 'accepts' a connection from the listening socket, which creates a new socket for the communication between the client and server, and the server then processes the request.
If your server can handle 20 simultaneous requests, that typically means it can have up to 20 threads processing connections at the same time. This is usually accomplished with a 'thread pool' of limited size. When a thread in the pool is available, it takes a new connection from the listening socket (it might have to wait for one) and processes it; it is only the fact that there are at most 20 of these threads that limits the number of requests handled simultaneously. (Nothing to do with a buffer of any kind, really.)
If the server is already processing 20 simultaneous requests when a new one comes in, the client's request will wait in the socket's listen queue until the server eventually picks it up, or it will time out and fail if it has been waiting too long.
There is also a limit (the TCP 'backlog') on the number of connection requests that can wait in the listen queue. If a connection request comes in when the listen queue is full, it is immediately rejected. If you want your server to handle 20 simultaneous requests, the listen queue should be at least 20 long, so that if 20 requests arrive at the same time they all get queued until your server picks them up.
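
A minimal Go sketch of this idea (the port and the echo handler are placeholders): a fixed pool of 20 workers takes accepted connections from an unbuffered channel, so the accept loop blocks while all workers are busy and further clients wait in the kernel's listen queue:

    package main

    import (
        "io"
        "net"
    )

    const workers = 20

    func main() {
        ln, err := net.Listen("tcp", ":8080") // hypothetical port
        if err != nil {
            panic(err)
        }

        conns := make(chan net.Conn) // unbuffered hand-off to the pool

        // The 'thread pool': exactly 20 goroutines process connections.
        for i := 0; i < workers; i++ {
            go func() {
                for c := range conns {
                    io.Copy(c, c) // placeholder: echo the request back
                    c.Close()
                }
            }()
        }

        for {
            c, err := ln.Accept()
            if err != nil {
                continue
            }
            conns <- c // blocks while all 20 workers are busy
        }
    }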

Is there any chance of conflict if I make 100 simultaneous HTTP requests asynchronously to the same destination from the same source?

(1) If I make a hundred HTTP requests asynchronously from a client application to a single destination (i.e. the same IP/port), is there any chance of a conflict on the client side?
What I understand is that whenever an application makes an HTTP request, the OS assigns a random port as the source, and the server's response is sent back to that source port only. As the requests are asynchronous and numerous, can there be cases where the OS assigns the same source port to another of these 100 requests, so that when the server actually responds to the first request, the second request also receives that response?
(2) Even if a conflict is improbable for 100 requests, is there an upper limit (because ports are limited, and the number of simultaneous requests may be nearly the same or more)?
(3) And is the scenario the same for all applications (whether a WinForms client or curl)?
A port number is 16 bits, so a system has at most 65535 (2^16 - 1) ports per IP address and protocol - shared between server and client ports.
Ans 1: The ports won't overlap or conflict when you make 100 or more simultaneous requests. But check on the server side whether it will accept that many requests from a single system/network.
Ans 2: The upper limit is 65535, and in practice less, since the OS hands out client ports from a restricted ephemeral port range.
Ans 3: Yes, this limit applies to all ports used by applications running on the system.
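
As a sanity check, here is a minimal Go sketch that fires 100 concurrent GETs at one destination (the URL is a placeholder). The OS gives each new TCP connection its own ephemeral source port, so responses are demultiplexed by the full connection tuple and cannot cross over; note that Go's http.Client also reuses idle connections, so fewer than 100 ports may actually be consumed:

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                resp, err := http.Get("http://example.com/") // placeholder URL
                if err != nil {
                    fmt.Println(i, "error:", err)
                    return
                }
                resp.Body.Close()
                fmt.Println(i, resp.Status) // each response matches its own request
            }(i)
        }
        wg.Wait()
    }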

How to determine the number of concurrent sessions made by individual IPs

I am working on an analyzer script. It is a simple bash script that applies some logic to a tcpdump capture.
My task is to find the number of concurrent sessions made by individual IPs. The logic I applied is to count the distinct source ports each IP uses toward the same destination IP and port, i.e. 3128, as it is a proxy server.
For example, consider that my dest IP is 172.31.1.1 and my dest port is 3128.
I sniffed traffic limited to only this dest port and dest IP.
Then I filtered out the source IP and source port pair for each packet.
Then I counted the number of distinct source ports for each source IP, and I think that should equal the number of concurrent sessions made by each individual IP with this proxy server.
Looking at the output on a running proxy server for a 10,000-packet sample, the number of sessions per IP comes out around 300, 250, 200, and sometimes less. For 100,000 packets it goes to around 3000 or 2500.
Is there something wrong with my interpretation of sessions? The number of concurrent sessions allowed by the firewall is 100 per IP.
As I mentioned in my comment, if you want to know the number of TCP connections from a single source IP at any given time, you will need to identify the connection establishment (TCP three-way handshake) and termination (four-way tear-down or reset) points. Otherwise you are counting all TCP connections, established and attempted, from a given IP over the whole duration of the capture (and since ephemeral client ports can be recycled during the capture period, even this count might not be accurate).
I should mention that incrementing a running count of connections on a SYN and decrementing it on a FIN or RST is not going to be enough, since TCP tends to re-transmit packets. You'll need to track TCP states, so good familiarity with the TCP state diagram is probably in order:
(TCP state diagram: http://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Tcp_state_diagram_fixed.svg/250px-Tcp_state_diagram_fixed.svg.png)
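
As a rough illustration of that state tracking (a simplification, not the full state machine), here is a sketch using the gopacket library: it opens a session on the first SYN of a 4-tuple and closes it on the first FIN or RST seen in either direction, so straightforward retransmissions are not double-counted. The capture filename is a placeholder:

    package main

    import (
        "fmt"

        "github.com/google/gopacket"
        "github.com/google/gopacket/layers"
        "github.com/google/gopacket/pcap"
    )

    func main() {
        handle, err := pcap.OpenOffline("proxy.pcap") // hypothetical capture file
        if err != nil {
            panic(err)
        }
        defer handle.Close()

        open := map[string]string{} // flow key -> client IP of an open session
        count := map[string]int{}   // client IP -> sessions currently open

        src := gopacket.NewPacketSource(handle, handle.LinkType())
        for pkt := range src.Packets() {
            ip, _ := pkt.Layer(layers.LayerTypeIPv4).(*layers.IPv4)
            tcp, _ := pkt.Layer(layers.LayerTypeTCP).(*layers.TCP)
            if ip == nil || tcp == nil {
                continue
            }
            // Key each session by its full 4-tuple, in both directions.
            key := ip.NetworkFlow().String() + "|" + tcp.TransportFlow().String()
            rev := ip.NetworkFlow().Reverse().String() + "|" + tcp.TransportFlow().Reverse().String()

            switch {
            case tcp.SYN && !tcp.ACK: // the client's initial SYN opens a session
                if _, seen := open[key]; !seen {
                    client := ip.SrcIP.String()
                    open[key] = client
                    count[client]++
                }
            case tcp.FIN || tcp.RST: // either side may close it
                for _, k := range []string{key, rev} {
                    if client, ok := open[k]; ok {
                        count[client]--
                        delete(open, k)
                        break
                    }
                }
            }
        }
        for ip, n := range count {
            fmt.Printf("%s: %d sessions still open at end of capture\n", ip, n)
        }
    }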
