About comet: long TCP connections and performance

I am new to comet and have two questions:
I think comet keeps the TCP connection between client and server open much longer than a normal request/response cycle. Will this reduce server performance, given that a server can only hold a limited number of TCP connections?
Also, sometimes the nature of the device or network can prevent an application from maintaining a long-lived TCP connection to a server. How does comet avoid this issue?

On Linux (epoll) or BSD (kqueue), you can have hundreds of thousands of idle connections without a performance penalty (except memory usage). The same is not true on other systems, which hit the wall much earlier: because of the limited pool of Windows handles allocated for this purpose in the kernel, your applications will suffer (unless you invest in an 'unlimited' Windows Server license).
Proxy servers in particular (and low-end routers as well) will cut idle connections after a short delay; the usual workaround is to use connection keep-alives.
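As a rough illustration of both points (many cheap idle connections, plus keep-alives as the workaround), here is a minimal sketch in Python; the selectors module picks epoll on Linux and kqueue on BSD, and the port, backlog and buffer size are arbitrary assumptions, not values from the question.

    # Sketch only: an event-driven echo server that can park many idle
    # connections cheaply. selectors uses epoll/kqueue under the hood.
    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(listener):
        conn, _addr = listener.accept()
        conn.setblocking(False)
        # Enable TCP keepalive so idle connections are less likely to be
        # dropped by intermediaries (the workaround mentioned above).
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sel.register(conn, selectors.EVENT_READ, handle)

    def handle(conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(data)       # echo back, just to have something to do
        else:
            sel.unregister(conn)     # client closed the connection
            conn.close()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))  # port is an arbitrary choice
    listener.listen(1024)
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, accept)

    while True:
        for key, _events in sel.select():
            key.data(key.fileobj)     # dispatch to accept() or handle()

Each idle connection costs little more than its socket buffers and a selector entry, which is why the connection count can grow so large on epoll/kqueue systems.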
Hope it helps.

Related

What would happen if a process established multiple PostgreSQL connections and terminated without closing them?

I'm writing a DLL for a purchased piece of software.
The software performs multi-threaded calculations on certain tasks.
My job is to output the relevant results into a database.
However, due to the limited support of the software, it is difficult to do multi-threaded output of the data.
The key problem is that there is no information about the last execution of the DLL function.
Therefore, the database connection will not be closed.
So may I ask: if I leave the connection open and terminate the process, what are the potential problems?
My platform is Windows Server 2008 with PostgreSQL 10.
I don't understand the background information you are giving, but I can answer the question:
If a PostgreSQL client process dies without closing the database (and TCP) connection, the PostgreSQL server process (“backend process”) that serves this connection will not realize this immediately.
Of course, as soon as the server tries to communicate with the client, e.g. to return some results, it will notice that the partner has gone away and will report an error.
However, often the backend process is idle, waiting for the client to send the next request. In this case, it would never notice that its partner has died. This could eventually cause max_connections to be exhausted by dead connections.
Because this is a common problem in networking, TCP provides “keepalive” functionality: when a connection has been idle for a while (2 hours by default), the operating system sends a so-called “keepalive packet” and waits for a response from the other side. Sending keepalive packets is repeated several times (5 times by default) at short intervals (1 second by default), and if no response is received, the connection is closed by the operating system; the backend process receives an error and terminates.
PostgreSQL provides parameters with which you can configure these settings on the server side: tcp_keepalives_idle, tcp_keepalives_count and tcp_keepalives_interval. If you set tcp_keepalives_idle to a shorter value, dead connections will be detected and removed faster, at the cost of a little added network traffic.
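As a hedged illustration (Python with psycopg2; the host, database, user and keepalive values are assumptions, not from the question), the client can request the same keepalive behaviour through libpq connection parameters, and the server-side settings can also be adjusted per session:

    # Sketch: requesting TCP keepalive from the client side via libpq
    # parameters (keepalives_idle etc. mirror the tcp_keepalives_* settings).
    import psycopg2

    conn = psycopg2.connect(
        host="db.example.com",   # placeholder values for illustration
        dbname="mydb",
        user="myuser",
        keepalives=1,            # enable TCP keepalive on this connection
        keepalives_idle=60,      # seconds of idleness before the first probe
        keepalives_interval=10,  # seconds between probes
        keepalives_count=3,      # lost probes before the connection is dropped
    )

    with conn.cursor() as cur:
        # The server-side equivalents can also be set per session:
        cur.execute("SET tcp_keepalives_idle = 60")
        cur.execute("SHOW tcp_keepalives_idle")
        print(cur.fetchone()[0])

    conn.close()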

Will a TCP connection always recover eventually? How quickly?

I'm thinking about a simple scheme to transfer data over a very unreliable network: simply disable all TCP timeouts, or set them to extreme values such as a day.
If the network fails temporarily (a router is unplugged, etc.), will the TCP connection recover eventually? How quickly will that happen?
I'm not concerned about transfer speed consistency. Very long pauses are OK.
If this is OS specific then I'd like to know for Windows.
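Not a Windows-specific answer, but to make "set them to extreme values" concrete: on Linux the per-socket knob for this is TCP_USER_TIMEOUT. A minimal Python sketch, where the target address and the 24-hour value are arbitrary assumptions:

    # Linux-only sketch: let unacknowledged data sit for up to a day before
    # the kernel gives up on the connection (TCP_USER_TIMEOUT).
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, "TCP_USER_TIMEOUT"):        # exposed on Linux, Python 3.6+
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT,
                     24 * 60 * 60 * 1000)          # 24 hours, in milliseconds
    s.connect(("example.com", 80))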

On the linux network socket server machine, what happens when all network ports are allocated for clients

On a Linux machine acting as a network socket server, what happens when all network ports are allocated to clients? If that happens, are connection requests from clients denied, or delayed? And if so, is it right to think that one Linux machine can serve at most as many clients simultaneously as it has open ports? (Assume all other resources are sufficient.)
If that's right, is it right to think that one Linux machine can serve at most the number of open ports simultaneously?
No, the port is not the limiting factor here. A connected TCP socket is actually identified by a quintuple (src_port, src_address, dest_port, dest_address, protocol).
So, for a server listening on one port, every client can make as many connections as its ip_local_port_range setting allows, using the same protocol.
However, you can work around even this: if you have more IP addresses (you could use IP aliasing for this, even if you don't have more than one interface), or if your server listens on more than one port, the number of possible connections goes up.
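A small self-contained sketch (Python; the loopback address is an arbitrary choice) showing that several connections from one client to the same server port coexist because each one gets its own ephemeral source port, so the quintuples differ:

    # Sketch: two connections to the same (address, port) coexist because
    # each client socket gets its own ephemeral source port.
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))     # port 0: let the kernel pick a free port
    server.listen(5)
    addr = server.getsockname()

    clients = []
    for _ in range(2):
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(addr)
        clients.append(c)
        print("client side of the quintuple:", c.getsockname(), "->", addr)

    for c in clients:
        c.close()
    server.close()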
Resources:
http://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html

Sockets leaked in windows not shown in netstat and tcpview

Is it possible that Windows leaks socket connections and that these sockets are not shown in TcpView or netstat?
After running a few applications that perform many network connections, my Windows machine enters a state in which it is not able to open any new socket connection, even to itself (localhost).
For example, a telnet to a local application fails because Windows can't create new sockets.
Closing and restarting the network applications does not help. Only a full Windows restart solves the problem.
netstat (and TcpView) indicate that there are only a few dozen connections.
Thanks for your help.
No, it is not possible for those tools to miss leaked connections. Something else is going on. Maybe you are not looking at their detailed views, e.g. you are not seeing closed sockets that are sitting in the TIME_WAIT state. If you cannot open new socket connections, you are most likely encountering port exhaustion. Wait some time for ports to time out and become available again, or stop wasting ports in the first place.
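If you want to check the port-exhaustion theory, a rough sketch like the following (Python with the third-party psutil package, which is an assumption here, not something from the question) counts sockets per TCP state; a very large TIME_WAIT count is the usual symptom:

    # Sketch: count sockets per TCP state to spot ephemeral-port exhaustion.
    # Requires psutil (pip install psutil); on Windows, run with administrator
    # rights to see connections belonging to all processes.
    from collections import Counter

    import psutil

    states = Counter(conn.status for conn in psutil.net_connections(kind="tcp"))
    for state, count in states.most_common():
        print(f"{state:15} {count}")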

PostgreSQL UNIX domain sockets vs TCP sockets

I wonder whether UNIX domain socket connections to PostgreSQL are faster than TCP connections from localhost under a high concurrency rate, and if so, by how much.
Postgres core developer Bruce Momjian has blogged about this topic. Momjian states, "Unix-domain socket communication is measurably faster." He measured query network performance showing that the local domain socket was 33% faster than using the TCP/IP stack.
UNIX domain sockets should offer better performance than TCP sockets over loopback interface (less copying of data, fewer context switches), but I don't know whether the performance increase can be demonstrated with PostgreSQL.
I found a small comparison on the FreeBSD mailing list: http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html.
I believe that UNIX domain sockets in theory give better throughput than TCP sockets on the loopback interface, but in practice the difference is probably negligible.
Data carried over UNIX domain sockets don't have to go up and down through the IP stack layers.
Re: Alexander's answer. AFAIK you shouldn't get more than one context switch or data copy in each direction (i.e. for each read() or write()), which is why I believe the difference will be negligible. The IP stack doesn't need to copy the packet as it moves between layers, but it does have to manipulate internal data structures to add and remove the higher-layer packet headers.
AFAIK, UNIX domain sockets (UDS) work like system pipes: they carry only the data, with no checksums or other additional protocol information, and no three-way handshake as with TCP sockets...
PS: so maybe UDS will be a bit faster.
TCP sockets on localhost are usually implemented using UNIX domain sockets, so the difference on most systems is negligible to none. However, this is not standard in any way -- it is just how it is usually done, so you should not depend on it.
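If you want to measure it yourself, here is a rough sketch (Python with psycopg2; the socket directory, database, user and query count are assumptions, and real numbers will depend heavily on your workload and concurrency):

    # Sketch: time a trivial query over a UNIX domain socket vs. TCP loopback.
    import time

    import psycopg2

    def time_queries(**conn_args):
        conn = psycopg2.connect(dbname="postgres", user="postgres", **conn_args)
        cur = conn.cursor()
        start = time.perf_counter()
        for _ in range(10_000):
            cur.execute("SELECT 1")
            cur.fetchone()
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

    # UNIX domain socket: "host" is the directory containing the socket file.
    print("unix socket :", time_queries(host="/var/run/postgresql"))
    # TCP over the loopback interface.
    print("tcp loopback:", time_queries(host="127.0.0.1", port=5432))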
