I have the following system:
A Windows 2003 server running WebSphere Application Server, listening on port 8080.
A lot of clients of this server.
I tried a load test - making many clients connect to the server and request services. It didn't end well: many clients were denied service, and the server started reporting that it was unable to create new sockets.
My question is: which parameters should I change in Windows?
I thought about the number of connections, but I am not sure such a setting exists on 2003 (from what I have read). Instead, there is a setting for the number of user ports, which I don't think is what I need, since I am only using one port (8080) on the server side.
Am I wrong in assuming that I am only using one port on the server side?
Are there parameters for the number of connections per port or per system? Or is this perhaps affected by the amount of data transferred? I pass a lot of data, so if there is a parameter that limits based on the amount of data transferred, I would be glad to hear about it.
Should I also reduce the time each connection waits after tear-down? That might make the pool of connections more available. If so, which parameter is this?
Are there any other parameters relevant to this problem?
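For reference, the registry values I keep seeing mentioned for this kind of tuning live under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. I am not sure they are the right ones for my case, and the numbers below are only illustrative:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    ; widen the ephemeral (user) port range
    "MaxUserPort"=dword:0000fffe
    ; shorten how long closed connections linger in TIME_WAIT, in seconds
    "TcpTimedWaitDelay"=dword:0000001e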
I'm experimenting with server-sent events (SSE) as an alternative to websockets for real-time data pushing (data in my application is primarily one-directional).
How scalable would this be? I know that each SSE connection uses an HTTP request -- does this mean that a web server can handle as many SSE connections as HTTP requests (something like this answer)? I feel as though this might be the case, but I'm not sure how an SSE connection works and whether it is substantially more complex or resource-hungry than a simple HTTP request.
I'm mostly wondering how this compares to the number of concurrent websockets a browser can keep open. This answer suggests that only ~1400-1800 sockets can be handled by a server at the same time.
Can someone provide some insight on this?
(To clarify, I am not asking about how many SSE connections can be kept open from the client; I am asking about how many can be reasonably kept open by a web server.)
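(For context, my rough mental model of a server-side SSE endpoint is just an HTTP response that is never closed - a minimal Node sketch, with the port and payload made up:)

    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
      });
      // the connection stays open; each push is just another write on the same response
      const timer = setInterval(() => res.write(`data: ${Date.now()}\n\n`), 1000);
      req.on('close', () => clearInterval(timer));
    }).listen(3000);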
Tomcat 8 (to give an example of a web server) and above uses the NIO connector for handling incoming requests. By default it can service at most 10,000 concurrent connections (see the docs). The docs do not mention a hard maximum number of connections per se. There is also another parameter, acceptCount, which is the fallback queue used once connections exceed 10,000.
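For example, the relevant attributes sit on the connector definition in server.xml; something like this (the numbers are just the defaults mentioned above, not recommendations):

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxConnections="10000"
               acceptCount="100"
               connectionTimeout="20000" />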
Socket connections are treated as files. Every incoming connection to Tomcat opens a socket, and how many can be open depends on the OS - on Linux, for example, on the file-descriptor limits. When too many connections are open, or the maximum has been reached, you will commonly see an error like the following:
java.net.SocketException: Too many open files
You can change the number of open files by editing
/etc/security/limits.conf
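For example, adding lines like these raises the open-file limit for the user that runs Tomcat (the user name and numbers are illustrative):

    tomcat  soft  nofile  65536
    tomcat  hard  nofile  65536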
It is not clear what the maximum allowed limit is. Some say the default for Tomcat is 1096, while the default for Linux is 30,000, and it can be changed.
In the article I have shared, the LinkedIn team were able to reach 250K connections on one host.
So that should give you a pretty good idea of the maximum number of SSE connections possible. It depends on your web server's max-connection configuration, the OS capacity, etc.
We're having a connection timeout issue with an API that pools connections to an Informix Connection Manager, which forwards the queries to the appropriate Informix database server.
Recently I set up the mail service and realized that we're having delays in receiving the mail that is sent. After troubleshooting, I saw that the database server's clock is not synchronized with the API's at all (a difference of 2+ minutes).
I've read somewhere that time synchronization is important when using JDBC pooling, but I can't find much information about this on the internet. The timeout kind of makes sense because of TCP keepalive.
Has anyone experienced this or does anyone know about it?
Thank you,
Mihai.
It is common to intermix database timestamps and local timestamps. This causes issues when the server times are different. If the mail server is looking for records dated before the current time, there could be a two-minute delay before mail is sent.
Email may also be delayed in transit between servers. Check the Received headers to see if there are any unexpected delays. (You will need to compensate for the time variances on the servers.)
Normally, you would use NTP to ensure the time is the same on all servers. Within a data center it should be able to synchronize clocks to within a millisecond or so.
I have a TcpListener server based on this source code: https://gist.github.com/leandrosilva/656054#file-server-cs
I created a server on port 3340. Whenever a client connects, the server waits for the next client connection. When I connect to the server from my Chrome browser, it appears that three clients are connected (I expected only one).
Why is that?
Most clients maintain multiple connections in parallel, including more than one connection per server endpoint.
RFC 7230, section 6.4, explains why: multiple connections are typically used to avoid the "head-of-line blocking" problem.
We have a Node instance with about 2,500 client socket connections. Everything runs fine, except that occasionally something happens to the service (a restart or a failover event in Azure); when the Node instance comes back up and all the socket connections try to reconnect, the service comes to a halt and the log just shows repeated socket connects/disconnects. Even if we stop the service and start it again, the same thing happens. We currently send out a package to our on-premise servers to kill the users' Chrome sessions, and then everything works fine as users begin logging in again. The clients currently connect with 'forceNew' and force web sockets only, rather than the default long-polling-then-upgrade. Has anyone ever seen this or have any ideas?
In your socket.io client code, you can force the reconnects to be spread out in time more. The two configuration variables that appear to be most relevant here are:
reconnectionDelay
Determines how long socket.io will initially wait before attempting a reconnect (it should back off from there if the server is down for a while). You can increase this to make it less likely that all the clients try to reconnect at the same time.
randomizationFactor
This is a number between 0 and 1.0 and defaults to 0.5. It determines how much the above delay is randomly modified to try to make client reconnects be more random and not all at the same time. You can increase this value to increase the randomness of the reconnect timing.
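Putting the two together, the client-side options might look roughly like this (the URL and the numbers are placeholders to tune against your own traffic):

    // socket.io client: spread reconnect attempts out over time
    const socket = io('https://your-app.example.com', {
      transports: ['websocket'],   // websockets only, no long-polling upgrade (as you already do)
      reconnectionDelay: 5000,     // initial wait before the first reconnect attempt (default 1000)
      reconnectionDelayMax: 60000, // cap for the exponential back-off between attempts
      randomizationFactor: 0.9     // more jitter so clients do not reconnect in lock-step (default 0.5)
    });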
See client doc here for more details.
You may also want to explore your server configuration to see if it is as scalable as possible with moderate numbers of incoming socket requests. While nobody expects a server to handle 2500 simultaneous connection attempts all at once, the server should be able to queue up these connection requests and serve them as it gets time, without outright failing any incoming connection that can't be handled right away. There is a desirable middle ground: some number of connections held in a queue (usually controllable by server-side TCP configuration parameters), and when the queue gets too large, connections are failed immediately so that socket.io backs off and tries again a little later. Adjusting the above variables will tell it to wait longer before retrying.
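As a rough illustration of the server-side knob: with a plain Node HTTP server, the third argument to listen() is the requested accept backlog (the number here is illustrative, and the effective value is still capped by the kernel, e.g. net.core.somaxconn on Linux):

    // httpServer is whatever http/https server socket.io is attached to (name assumed)
    httpServer.listen(3000, '0.0.0.0', 4096, () => {
      console.log('listening, backlog requested: 4096');
    });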
Also, I'm curious why you are using forceNew. That does not seem like it would help you. Forcing webSockets only (no initial polling) is a good thing.
I am new to WebSockets. While reading about them, I have not been able to find answers to some of my doubts. I would appreciate it if someone could clarify them.
Do WebSockets only broadcast data to all connected clients, rather than sending it to a particular client? Every example I tried (mainly chat apps) sends data to all the clients. Is it possible to change this?
How does it work for clients behind NAT (behind a router)?
Since the client-server connection always remains open, how will a large number of connections affect server performance?
Since I want all my clients to get real-time updates, they all need to be connected to the server, so how should I handle the client connection limit?
NOTE: My client is not a web browser but a desktop application.
No, WebSockets are not only for broadcasting. You send messages to specific clients; when you broadcast, you just send the same message to all connected clients, but you can send different messages to different clients, for example in a game session.
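For example, with the "ws" npm package on Node (a minimal sketch; the port, ids, and message shapes are made up), sending to one client versus broadcasting is just a matter of which sockets you write to:

    const { WebSocketServer } = require('ws'); // ws v8-style import

    const wss = new WebSocketServer({ port: 8081 });
    const clients = new Map(); // id -> socket
    let nextId = 0;

    wss.on('connection', (socket) => {
      const id = ++nextId;
      clients.set(id, socket);
      socket.on('close', () => clients.delete(id));

      // send to this one client only
      socket.send(JSON.stringify({ type: 'welcome', id }));
    });

    // broadcasting is just a loop over whichever clients you choose
    function broadcast(message) {
      for (const socket of clients.values()) {
        socket.send(message);
      }
    }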
The clients connect to the server and initialise the connections, so NAT is not a problem.
It's good to use a scalable server, e.g. an event-driven server (such as Node.js) that doesn't use a separate thread for each connection, or an Erlang server with lightweight processes (a good choice for a game server).
This should not be a problem if you use a good server OS (e.g. Linux), but may be a limitation if your server uses a desktop version of Windows (e.g. may be limited to 200 connections).