TIdTCPServer listening on multiple ports? - Indy

Alright, I am trying to understand this approach. Let's say I have two servers running:
Server A is on IP 1.1.1.1 and port 36663
Server B is on IP 2.2.2.2 and port 54223
Can clients connected to Server A communicate with clients connected to Server B? For example, a client connects to Server A and wants to send some data to a client that is connected to Server B. Can this be done using the Indy TCP server?
If the answer is yes, an example would be very helpful for fully understanding this approach.
I have two servers on different machines; one machine has a slow network and the other has a good network.
The logic here is: when a client's connection to Server A takes more than 20 seconds, during those 20 seconds the client should try to connect to the other server's IP and still be able to communicate with the clients already connected to Server A.

TIdTCPServer has a Bindings property, which is a collection of IP/Port pairs that the server listens on. You can have a single TIdTCPServer object listening on multiple IP/Port pairs, or you can use multiple TIdTCPServer objects listening on different pairs, on the same machine.
Either way, the connected clients are stored in the TIdTCPServer.Contexts property.
When a client wants to send data to another client, regardless of which server IP/Port it is connected to, all you have to do is iterate through the Contexts list of the appropriate TIdTCPServer object until you find the TIdContext object of the target client, and then you will have access to its Connection.IOHandler property.
On the other hand, if you have separate TIdTCPServer objects running on different machines, clients cannot directly communicate with clients on another server. You would have to establish a connection between the two servers and then you can proxy any client-to-client data through that connection as needed.
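To make the Contexts idea concrete without Delphi at hand, here is a minimal Python sketch of the same pattern (this is not Indy code): the server keeps a registry of connected clients, playing the role of TIdTCPServer.Contexts, and when one client asks to send to another, it looks the target up and writes to that client's connection, the role played by Connection.IOHandler in Indy. The "TO <client_id> <message>" line format and the use of the source address as the client ID are invented for illustration.

import socket
import threading

clients = {}                      # client_id -> connected socket (the role of the Contexts list)
lock = threading.Lock()

def handle_client(conn, addr):
    client_id = "%s:%d" % addr    # identify each client by its source address
    with lock:
        clients[client_id] = conn
    try:
        for line in conn.makefile("r"):
            # invented wire format for this sketch: "TO <client_id> <message>"
            parts = line.rstrip("\n").split(" ", 2)
            if len(parts) == 3 and parts[0] == "TO":
                with lock:
                    target = clients.get(parts[1])
                if target:
                    # the equivalent of writing to the target's Connection.IOHandler
                    target.sendall(("%s says: %s\n" % (client_id, parts[2])).encode())
    finally:
        with lock:
            clients.pop(client_id, None)
        conn.close()

listener = socket.create_server(("0.0.0.0", 36663))
while True:
    conn, addr = listener.accept()
    threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

For two servers on different machines, the same lookup-and-forward loop would send the message over a server-to-server connection instead of a local socket, as described above.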

Related

Multiple clients - one server - one port? [duplicate]

I understand the basics of how ports work. However, what I don't get is how multiple clients can simultaneously connect to say port 80. I know each client has a unique (for their machine) port. Does the server reply back from an available port to the client, and simply state the reply came from 80? How does this work?
First off, a "port" is just a number. All a "connection to a port" really represents is a packet which has that number specified in its "destination port" header field.
Now, there are two answers to your question, one for stateful protocols and one for stateless protocols.
For a stateless protocol (i.e. UDP), there is no problem because "connections" don't exist - multiple people can send packets to the same port, and their packets will arrive in whatever order. Nobody is ever in the "connected" state.
For a stateful protocol (like TCP), a connection is identified by a 4-tuple consisting of source and destination ports and source and destination IP addresses. So, if two different machines connect to the same port on a third machine, there are two distinct connections because the source IPs differ. If the same machine (or two behind NAT or otherwise sharing the same IP address) connects twice to a single remote end, the connections are differentiated by source port (which is generally a random high-numbered port).
Simply, if I connect to the same web server twice from my client, the two connections will have different source ports from my perspective and destination ports from the web server's. So there is no ambiguity, even though both connections have the same source and destination IP addresses.
Ports are a way to multiplex IP addresses so that different applications can listen on the same IP address/protocol pair. Unless an application defines its own higher-level protocol, there is no way to multiplex a port. If two connections using the same protocol simultaneously have identical source and destination IPs and identical source and destination ports, they must be the same connection.
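A quick way to see the 4-tuple distinction in practice is to open two TCP connections from the same machine to the same destination port and print the source port the OS picked for each; example.com:80 below is just an arbitrary reachable endpoint chosen for illustration.

import socket

a = socket.create_connection(("example.com", 80))
b = socket.create_connection(("example.com", 80))

print("connection A:", a.getsockname(), "->", a.getpeername())
print("connection B:", b.getsockname(), "->", b.getpeername())
# Both connections share the same destination (IP, 80) and the same source IP,
# but the local ports differ, so the two 4-tuples are distinct.

a.close()
b.close()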
Important:
I'm sorry to say that the response from "Borealid" is imprecise and somewhat incorrect. Firstly, statefulness or statelessness has no bearing on the answer to this question, and most importantly, the definition of the tuple that identifies a socket is incomplete.
First, remember these two rules:
Primary key of a socket: A socket is identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL} not by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT} - Protocol is an important part of a socket's definition.
OS Process & Socket mapping: A process can be associated with (can open/can listen to) multiple sockets which might be obvious to many readers.
Example 1: Two clients connecting to the same server port means socket1 {SRC-A, 100, DEST-X, 80, TCP} and socket2 {SRC-B, 100, DEST-X, 80, TCP}. This means host A connects to server X's port 80 and another host B also connects to the same server X on the same port 80. How the server handles these two sockets depends on whether the server is single-threaded or multi-threaded (I'll explain this later). What is important is that one server can listen to multiple sockets simultaneously.
To answer the original question of the post:
Irrespective of stateful or stateless protocols, two clients can connect to the same server port because each client can be assigned a different socket (as the client IPs will definitely differ). The same client can also have two sockets connecting to the same server port, since such sockets differ by SRC-PORT. In all fairness, "Borealid" essentially gave the same correct answer, but the reference to stateless/stateful was unnecessary and confusing.
To answer the second part of the question, on how a server knows which socket to answer: first understand that for a single server process listening on one port, there can be more than one socket (possibly from the same client or from different clients). As long as the server knows which request is associated with which socket, it can always respond to the appropriate client over that same socket. Thus a server never needs to open another port on its own node beyond the one the client initially connected to. If a server allocated a different port after a socket was bound, it would in my opinion be wasting resources, and it would require the client to connect again to the newly assigned port.
A bit more for completeness:
Example 2: It's a very interesting question: can two different processes on a server listen on the same port? If you do not consider protocol to be one of the parameters defining a socket, then the answer is no, because in that case a client trying to connect to a server port would have no mechanism for indicating which of the two listening processes it intends to connect to (this is the same theme asserted by rule 2). However, that is the WRONG answer, because protocol is also part of the socket definition. Thus two processes on the same node can listen on the same port, but only if they are using different protocols. For example, two unrelated clients (say one using TCP and another using UDP) can connect and communicate with the same server node on the same port, but they must be served by two different server processes.
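As a small illustration of the protocol rule, the sketch below binds a TCP socket and a UDP socket to the same port number at the same time (port 5000 is chosen arbitrarily); the two binds do not clash because {port, TCP} and {port, UDP} are distinct, and the two sockets could just as well belong to two different processes.

import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp.bind(("0.0.0.0", 5000))
tcp.listen()
udp.bind(("0.0.0.0", 5000))   # succeeds: same port number, different protocol

print("TCP listener bound to", tcp.getsockname())
print("UDP socket bound to", udp.getsockname())

tcp.close()
udp.close()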
Server Types - single & multiple:
When a server process is listening on a port, multiple sockets can simultaneously connect to and communicate with that server process. If the server uses a single process (or thread) to serve all the sockets, it is called a single-process/threaded server; if it spawns a sub-process (or thread) to serve each socket, it is called a multi-process/threaded server. Note that irrespective of the server's type, a server can and should always use the same initial socket to respond (there is no need to allocate another server port).
A Note on Parent/Child Process (in response to query/comment of 'Ioan Alexandru Cucu')
Wherever I mention two processes, say A and B, assume they are not related by a parent-child relationship. OSes (especially UNIX) by design allow a child process to inherit all file descriptors (FDs) from its parent. Thus all the sockets that process A is listening to (in UNIX-like OSes, sockets are also FDs) can also be listened to by further processes A1, A2, ... as long as they are related to A by a parent-child relationship. But an independent process B (i.e. one having no parent-child relation to A) cannot listen to the same socket. Note also that this rule disallowing two independent processes from listening to the same socket is enforced by the OS (or its network libraries), and it is obeyed by most OSes. However, one could create an OS that violates this restriction.
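A minimal sketch of the parent/child point on a UNIX-like OS (the port number is arbitrary): after fork(), the child inherits the parent's listening socket as an ordinary file descriptor, so both processes can call accept() on the very same socket, while an unrelated process trying to bind() the same address would simply fail.

import os
import socket

srv = socket.create_server(("127.0.0.1", 6000))

pid = os.fork()
role = "child" if pid == 0 else "parent"

# Parent and child both block here on the same inherited listening socket;
# connect twice (e.g. with two telnet sessions) and each process accepts
# one of the incoming connections.
conn, addr = srv.accept()
print(role, "accepted a connection from", addr)
conn.close()
srv.close()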
TCP / HTTP Listening On Ports: How Can Many Users Share the Same Port
So, what happens when a server listens for incoming connections on a TCP port? For example, let's say you have a web server on port 80. Let's assume that your computer has the public IP address 24.14.181.229 and the person that tries to connect to you has IP address 10.1.2.3. This person can connect to you by opening a TCP socket to 24.14.181.229:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer | Remote Computer
--------------------------------
<local_ip>:80 | <foreign_ip>:80
^^ not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive, because from the standpoint of the client, he has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, his port must be 80 too? This is a sensible thing to think, but it is actually not what happens. If it were correct, we could only serve one user per foreign IP address. Once a remote computer connected, it would hog the port-80-to-port-80 connection, and no one else could connect.
Three things must be understood:
1.) On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2.) Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.) When a client connects to a server, it picks a random, unused high-order source port. This way, a single client can have up to ~64k connections to the server for the same destination port.
So, this is really what gets created when a client connects to a server:
Local Computer | Remote Computer | Role
-----------------------------------------------------------
0.0.0.0:80 | <none> | LISTENING
24.14.181.229:80 | 10.1.2.3:<random_port> | ESTABLISHED
Looking at What Actually Happens
First, let's use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let's start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
So now there is one process that is actively listening (State: LISTEN) on port 500. The local address is 0.0.0.0, which means "listening on all interfaces". An easy mistake is to listen on address 127.0.0.1, which will only accept connections from the current computer. This is not a connection; it just means that a process requested to bind() to that IP and port, and that process is responsible for handling all connections to that port. This hints at the limitation that there can only be one process per computer listening on a given port (there are ways to get around that using multiplexing, but that is a much more complicated topic). If a web server is listening on port 80, it cannot share that port with other web servers.
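Before connecting a client, that "one listener per port" limitation is easy to observe directly: a second attempt to bind and listen on a port that already has an active listener fails with EADDRINUSE (a high port is used below because binding port 500 needs root).

import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("0.0.0.0", 5000))
first.listen()
print("first listener bound to", first.getsockname())

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("0.0.0.0", 5000))   # same port, while the first is still listening
    second.listen()
except OSError as exc:
    print("second listener failed:", exc)   # e.g. [Errno 98] Address already in use

first.close()
second.close()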
So now, let's connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/dcore/tree/master/apps/quicknet) that opens a TCP socket, sends the payload ("Test payload." in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:54240 ESTABLISHED -
If you connect with another client and do netstat again, you will see the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:26813 ESTABLISHED -
... that is, the second client used another random source port for its connection, so the two connections are never confused, even though they come from the same client IP address.
Normally, for every connecting client the server forks a child process that communicates with the client (TCP). The parent server hands off to the child process an established socket that communicates back to the client.
When you send the data to a socket from your child server, the TCP stack in the OS creates a packet going back to the client and sets the "from port" to 80.
Multiple clients can connect to the same port (say 80) on the server because on the server side, after creating a socket and calling bind() (setting the local IP and port), listen() is called on the socket, which tells the OS to accept incoming connections.
When a client tries to connect to the server on port 80, accept() is invoked on the listening socket. This creates a new socket for the connecting client, and new sockets will likewise be created for subsequent clients using the same port 80.
bind(), listen() and accept() are system calls.
Ref
http://www.scs.stanford.edu/07wi-cs244b/refs/net2.pdf
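The bind()/listen()/accept() sequence described above can be sketched as a minimal threaded echo server in Python: the listening socket stays bound to its port for the whole lifetime of the process, and accept() hands back a brand-new socket for every client, which is then served in its own thread (the port number is arbitrary).

import socket
import threading

def serve_one(conn, addr):
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)          # echo back over the per-client socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 5000))        # set the local IP and port
listener.listen()                       # tell the OS to accept incoming connections

while True:
    conn, addr = listener.accept()      # a new socket per connecting client
    threading.Thread(target=serve_one, args=(conn, addr), daemon=True).start()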

How to get the IP address of a connected WebSocket-client?

I'm currently working on an ABAP Push Channel server to WebSocket client connection, and I need the IP address of the client in order to identify whether it is the one I want to send the message to. In my scenario there could be multiple WebSocket connections.
Now, there is the ssi_websocket_table table with the ssi_websocket_table_row row type, which has the field caller_ip. However, this gives me the IP address of the DNS server of the network I'm connected to, whereas I expected the IP address of my local PC, since the WebSocket client is running on this machine.
Is there any other way to get the clients IP address from an active WebSocket connection in ABAP?
P.S. Looking at all the table entries, it shows the correct IP when using a different server configuration; as soon as I know why that is the case, I will report back.
As pointed out by vwegert, it makes no sense to use the IP to tell the WebSockets apart; it would probably be better to use an ID for each WebSocket connection instead.
You could get the IP from the WebSocket server context, which apparently takes it from the headers of the opening HTTP handshake of the connection:
DATA(lo_context) = i_context.                           " i_context is of type IF_APC_WSP_SERVER_CONTEXT
DATA(lo_request) = lo_context->get_initial_request( ).  " the HTTP request of the opening handshake
" read the client address from the REMOTE_ADDR header field
DATA(lv_id) = lo_request->get_header_field( if_http_header_fields_sap=>remote_addr ).
The sample is taken from the SAP standard class CL_APC_WS_EXT_ABAP_ONLINE_COMM, method ON_MESSAGE.

On a Linux network socket server machine, what happens when all network ports are allocated for clients?

On a Linux network socket server machine, what happens when all network ports are allocated for clients? If that happens, are connection requests from clients denied, or delayed? And if so, is it right to think that one Linux machine can serve at most as many clients simultaneously as it has open ports? (Under the assumption that all other resources are sufficient.)
Is it right to think that one Linux machine can serve at most as many clients simultaneously as it has open ports?
No, the port is not the limiting factor here. A connected TCP socket is actually identified by a quintuple (src_port, src_address, dest_port, dest_address, protocol).
So, for a server listening on one port, each client can make as many connections as its ip_local_port_range setting allows, using the same protocol.
However, you can work around this: if you have more IP addresses (you could use IP aliasing for this, even if you don't have more than one interface), or if your server listens on more than one port, the number of possible connections goes up.
Resources:
http://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html
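On Linux, the ephemeral range mentioned above can be read straight from procfs; its width is the ceiling on the number of simultaneous outbound connections one client IP can hold open to a single (destination IP, destination port, protocol). A small sketch:

# Read the local ephemeral port range on Linux.
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())

print("ephemeral port range: %d-%d (%d ports)" % (low, high, high - low + 1))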

Socket connection rerouting

Most proxy servers perform the job of forwarding data to an appropriate "real" server. However, I am in the process of designing a distributed system in which when the "proxy" receives a TCP/IP socket connection, the remote system actually connects with a real server which the proxy nominates. All subsequent data flows from remote to the real server.
So is it possible to "forward" the socket connection request so that the remote system connects with the real server?
(I am assuming for the moment that nothing further can be done with the remote system, i.e. the proxy can't respond to the connection by sending the IP address of the actual server so that the remote system then connects to that.)
This will be under vanilla Windows (not Server), so can't use cunning stuff like TCPCP.
I assume your "remote system" is the one that initiates connection attempts, i.e. client of the proxy.
If I get this right: when the "remote system" wants to connect somewhere, you want the "proxy server" to decide where the connection will really go ("real server"). When the decision is made, you don't want to involve the proxy server any further - the data of the connection should not pass the proxy, but go directly between the "remote system" and the "real server".
Problem is, if you want the connection to be truly direct, the "remote system" must know the IP address of the "real server", and vice versa.
(I am assuming for the moment that nothing further can be done with the remote system, i.e. the proxy can't respond to the connection by sending the IP address of the actual server so that the remote system then connects to that.)
Like I said, not possible. Why is it a problem to have the "proxy" send back the actual IP address?
Is it security - you want to make sure the connection really goes where the proxy wanted? If that's the case, you don't have an option - you have to compromise. Either the proxy forwards all the data, and it knows where the data is going, or let the client connect itself, but you don't have control where it connects.
Most networking problems can be solved as long as you have complete control over the entire network. Here, for instance, you could involve routers on the path between the "remote system" and the "real server", to make sure the connection is direct and goes where the proxy wanted. But this is complex, and probably not an option in practice (since you may not have control over those routers).
A compromise may be to have several "relay servers" distributed around the network that will forward the connections instead of having the actual proxy server forward them. When a proxy makes a decision, it finds the best (closest) relay server, tells it about the connection, then orders the client to connect to the relay server, which makes sure the connection goes where the proxy intended it to go.
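A rough Python sketch of such a relay, under the assumption that the proxy has already told the relay which real server to use (REAL_SERVER below is a placeholder): the relay accepts the client's connection, opens its own connection to the real server, and pumps bytes in both directions.

import socket
import threading

REAL_SERVER = ("10.0.0.5", 9000)        # placeholder: the address the proxy chose

def pump(src, dst):
    while data := src.recv(4096):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)        # propagate end-of-stream to the other side

def relay(client):
    upstream = socket.create_connection(REAL_SERVER)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

listener = socket.create_server(("0.0.0.0", 8000))   # port the clients are told to use
while True:
    conn, _ = listener.accept()
    threading.Thread(target=relay, args=(conn,), daemon=True).start()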
There might be a way of doing this but you need to use a Windows driver to achieve it. I've not tried this when the connection comes from an IP other than localhost, but it might work.
Take a look at NetFilter SDK. There's a trial version which is fully functional up to 100000 TCP and UDP connections. The other possibility is to write a Windows driver yourself, but this is non-trivial.
http://www.netfiltersdk.com
Basically it works as follows:
1) You create a class which inherits from NF_EventHandler. In there you can provide your own implementation of methods like tcpConnectRequest to allow you to redirect TCP connections somewhere else.
2) You initialize the library with a call to nf_init. This provides the link between the driver and your proxy, as you provide an instance of your NF_EventHandler implementation to it.
There are also some example programs for you to see the redirection happening. For example, to redirect a connection on port 80 from process id 214 to 127.0.0.1:8081, you can run:
TcpRedirector.exe -p 80 -pid 214 -r 127.0.0.1:8081
For your proxy, this would be used as follows:
1) Connect from your client application to the proxy.
2) The connection request is intercepted by NetFilterSDK (tcpConnectRequest) and the connection endpoint is modified to connect to the server the proxy chooses. This is the crucial bit because your connection is coming from outside and this is the part that may not work.
Sounds like a routing problem, one layer lower than TCP/IP.
You're actually looking for an ARP-like proxy:
I'd say you need to manage ARP packets, checking the ARP requests:
CLIENT -> WHOIS PROXY.MAC
PROXY -> PROXY.IP is SERVER.IP
Then normal socket connection via TCP/IP from client to server.

Ports with C++ Server/Client applications

If I create a C++ server/client application, does the port I use to communicate need to be open on the routers of both the server machine and the client machine?
Or what other approach could I take? The client computer needs to receive information from the server, but I am not able to have any ports opened because it is on a school network...
[edit]
My setup is a PHP page running on a server: when I press "hello", the server makes an SSH connection through PHP and sends shell commands to the target machine. The server is the school server, which I have SSH access to and run all my things from. The client computer is my PC running on the school Wi-Fi, which is not connected to the server. The server will try to make an SSH connection to the public IP of my computer on the school Wi-Fi (no ports open; I can SSH out but not in). Will the methods you mention make this possible, in particular connect.c, since I can't run PuTTY from the server, and connect.c I could call from the PHP?
The choice of language is highly irrelevant here.
There don't need to be ports 'open' on any router, unless your traffic must pass through it. On normal peer hosts in the same network (or subnet) there would hardly be any firewall policy, not even in schools.
Technically it is possible for the switch to block peer-2-peer traffic (meaning traffic not destined to the outgoing gateway), but that is not very usual.
Of course, if the school doesn't allow outbound (WAN) traffic on most ports, tough luck, and they're absolutely right :)
You can look at
ssh (with tunnels -L, -D and -R options, perhaps -o GatewayPorts on)
stunnel
connect.c
http-tunnel
All very readily googled
To establish a TCP/IP connection, only the server port needs to be accessible by the client. The connection is full-duplex, therefore data can flow from the client to the server and vice-versa.
If you are using UDP for your application, which is a connection-less protocol, what happens depends heavily on the firewall or router and whether it performs connection tracking for your service or not.
Unless you provide some additional information on your service and the network setup on both the client and the server side, we cannot provide more concrete information.
