I have WebSocket C++ client code that connects to a Nitrous.io box (Node.js server) on port 8888.
Everything works well, except that if there is no transfer between the server and the client for a certain time (about 1 minute), the connection is closed.
The same client works fine when the server is hosted on my other server rather than on the Nitrous.io box.
I just wonder whether there is any limitation on WebSockets at Nitrous.io?
Currently, within nginx.conf on the Nitrous boxes, the value keepalive_timeout 5 is most likely causing the timeout of your WebSocket connection.
Unfortunately this cannot be adjusted without root access, but we (the Nitrous team) will look into increasing this value in an update. I'll update this thread once we do.
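Until that changes, the usual workaround is to keep the connection from ever looking idle by sending periodic WebSocket ping frames. Below is a minimal Go sketch of the idea (your client is C++ and the server is Node.js, so this is purely illustrative; the gorilla/websocket package and the URL are assumptions, and the ping interval must be comfortably shorter than whatever idle timeout the proxy enforces):

package main

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

// pingInterval must stay well below the proxy's idle timeout
// (reportedly around a minute here, and only 5 s per the nginx setting).
const pingInterval = 3 * time.Second

func main() {
	// Placeholder URL for the Nitrous-hosted endpoint.
	conn, _, err := websocket.DefaultDialer.Dial("ws://example-box.nitrousapp.com:8888/", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer conn.Close()

	// A ping frame counts as traffic, so the intermediate proxy's idle
	// timer is reset even when the application has nothing to send.
	ticker := time.NewTicker(pingInterval)
	defer ticker.Stop()
	for range ticker.C {
		if err := conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(2*time.Second)); err != nil {
			log.Println("ping failed, connection probably closed:", err)
			return
		}
	}
}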
Related
We have a WebSocket server for real-time communication deployed on Kubernetes. When the server is redeployed, all connections are dropped, and each client reconnects as soon as it detects that its connection has closed. At that moment the server receives a huge burst of traffic (about 200K connections) almost instantly, which can overload it.
We tried adding a time interval between restarting the individual pods, but the number of simultaneous reconnect requests is still higher than expected, and this approach does not work at all if the new server version is not forward compatible.
Is there any way to solve the problem of deploying a WebSocket server in this scenario?
Or is there any way for clients to stay connected while the server is being redeployed?
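On the reconnect side, one mitigation commonly suggested for exactly this kind of thundering herd is to have each client retry with randomized (jittered) exponential backoff, so the 200K clients that were disconnected at the same moment do not all come back at the same moment. A minimal Go sketch, assuming the gorilla/websocket package and a placeholder URL (only the backoff logic is the point here):

package main

import (
	"log"
	"math/rand"
	"time"

	"github.com/gorilla/websocket"
)

const (
	baseDelay = 1 * time.Second
	maxDelay  = 2 * time.Minute
)

// connectWithBackoff keeps dialing until it succeeds, sleeping a random
// amount of time (full jitter) whose upper bound grows exponentially
// after each failure.
func connectWithBackoff(url string) *websocket.Conn {
	delay := baseDelay
	for {
		conn, _, err := websocket.DefaultDialer.Dial(url, nil)
		if err == nil {
			return conn
		}
		// Sleep a random duration in [0, delay) so clients that were
		// disconnected at the same moment spread their retries out.
		sleep := time.Duration(rand.Int63n(int64(delay)))
		log.Printf("dial failed (%v), retrying in %v", err, sleep)
		time.Sleep(sleep)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	// On Go versions before 1.20, seed math/rand explicitly so different
	// clients do not all draw the same jitter sequence.
	conn := connectWithBackoff("ws://chat.example.internal/ws")
	defer conn.Close()
	// ... normal read/write loop; on error, call connectWithBackoff again.
}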
We have a web server and a client, both written in Go, that interact with each other. We want HAProxy to load balance requests between several instances of the server, but it's not working: the client always connects to the same server instance as long as that instance is up.
If I look at the output of "netstat -anp", I can see that there is a persistent connection established between the client and the server through HAProxy. I tried setting the Connection header in the response to "close", but that didn't work at all.
Needless to say, I'm completely confused by this. My first question is: is this a problem with the client, the server, or HAProxy? How does one force the client to disconnect? Am I missing something here? Curl works fine, so I know that HAProxy does load balance, but curl also shuts down completely when it finishes, which is why I suspect the persistent connection is causing my issues, since my client and server are long running.
Just as an FYI, I'm using go-martini on the server.
Thanks.
HTTP/1.1 uses KeepAlive by default. Since the connections aren't closed, HAProxy can't balance the requests between different backends.
You have a couple of options:
Force the connection to close after each request in your code. Setting Request.Close = true on either the client or the server will send a Connection: close header, telling both sides to close the TCP connection (see the sketch after this answer).
Alternatively, you could have HAProxy alter the requests by setting http-server-close, so the backend connection is closed after each request, or http-close, to shut down both sides after each request.
http-server-close is usually the best option, since that still maintains persistent connections for the client, while proxying each request individually.
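Here is a minimal Go sketch of the first option on the client side (the URL is a placeholder): marking the outgoing request with Close = true makes net/http send a Connection: close header and tear the TCP connection down after the response, so the next request has to dial through HAProxy again and can land on a different backend.

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder URL pointing at the HAProxy frontend.
	req, err := http.NewRequest("GET", "http://haproxy.example.internal/api/status", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Close = true // send Connection: close and drop the TCP connection afterwards

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	log.Printf("%s", body)
}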
My application consists of two pieces: a WebSocket server, hosted on an OpenShift DIY cartridge, and a WebSocket client, which connects to the server from my home PC. The server is written using embedded Jetty and its WebSocket library; the client is written in Java using the Tyrus library. It works pretty well except for one glitch that I cannot explain.
When the WebSocket server runs on the OpenShift DIY cartridge, the WebSocket connection gets dropped every 2 minutes. The drops happen quite precisely, so they are obviously not related to potential network outages. Besides, I have tested exactly the same application on Heroku and there were no connection drops. Moreover, the onClose(...) method receives a NORMAL_CLOSURE close code.
I am almost sure that the OpenShift Apache layer closes idle WebSocket connections every 2 minutes, even though the WebSocket client sends Ping messages and receives Pong messages from the server. Has anyone experienced this type of WebSocket connection drop? Are there any parameters I can use to prevent the drops?
Thank you in advance.
Update: I added a dedicated thread on the server side that sends Pong messages to the client (Jetty does not support Pong handlers yet, so I cannot use Ping messages), and the drops disappeared. It seems the OpenShift Apache layer now treats the connection as "alive" and does not close it. Then I noticed one more strange behavior: something pings my server-side application via HTTPS every hour. The HTTP headers look like this:
HTTP/1.1 HEAD /
Accept: */*
User-Agent: Ruby
X-Forwarded-Proto: https
X-Forwarded-Host: ....rhcloud.com
Connection: keep-alive
X-Request-Start: t=1409771442217677
X-Forwarded-For: 10.158.21.225
Host: wsproxy-gimes4dieni.rhcloud.com
X-Forwarded-Port: 443
X-Client-IP: 10.158.21.225
X-Forwarded-SSL-Client-Cert: (null)
X-Forwarded-Server: localhost
I do not use Ruby, I am using only HTTP, and the IP address is different from that of my regular requests. Does anybody have a clue whether this is some sort of OpenShift "service" or whether it is coming from the Internet?
SSH into your project, open ~/haproxy/conf/haproxy.cfg with a text editor such as vi, and edit timeout queue, timeout client, and timeout server to whatever you want. I set mine to 5m, which is 5 minutes. After you have made the changes, exit and run
~/haproxy/bin/control restart
Now your websocket timeout should be set.
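For reference, the edited part of ~/haproxy/conf/haproxy.cfg would look something like the following (the section layout and the other settings on your gear may differ; only the three timeout lines matter here):

defaults
    # ... existing settings unchanged ...
    timeout queue   5m
    timeout client  5m
    timeout server  5m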
Case:
A WebSocket connection has been established between the client and server endpoints.
Now suppose the network connection goes down (for example, the ADSL line dies). After 10 minutes I restore the network, and I find that the client and server are still able to communicate with each other. Why?
Note:
The client was developed with the Java-WebSocket framework, and the server with ws4py.
1 - If they did not try to exchange any data while only the connection between them (not the endpoints) was down, this is normal behaviour: neither side sent anything, so neither side ever noticed that the path was broken.
2 - If the WebSocket connection actually ended, the browser may have re-established it without you knowing about it. I just checked, and that is not normal behaviour, but maybe there is some parameter somewhere :-)
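If you want the endpoints to notice a dead link instead, the usual pattern is an application-level heartbeat: send periodic pings and treat the connection as dead when no pong (or any other data) arrives within a deadline. A minimal Go sketch of that pattern, assuming the gorilla/websocket package and a placeholder URL (your actual client uses Java-WebSocket and the server ws4py, so this is purely illustrative):

package main

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

const (
	pingInterval = 20 * time.Second
	pongWait     = 30 * time.Second // must be longer than pingInterval
)

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://server.example.internal/ws", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer conn.Close()

	// Every pong pushes the read deadline forward; if the network dies,
	// pongs stop arriving and the next read fails with a timeout.
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})

	// Writer side: keep sending pings.
	go func() {
		ticker := time.NewTicker(pingInterval)
		defer ticker.Stop()
		for range ticker.C {
			if err := conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(5*time.Second)); err != nil {
				return
			}
		}
	}()

	// Reader side: ReadMessage returns an error once the deadline passes
	// without any traffic, which is how a silently dead ADSL line shows up.
	for {
		if _, _, err := conn.ReadMessage(); err != nil {
			log.Println("connection considered dead:", err)
			return
		}
	}
}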
I have the following system:
A Windows 2003 server running WebSphere Application Server, listening on port 8080.
A lot of clients of this server.
I tried a load test, making clients connect to the server and request services. This didn't end well: many clients were denied service, and the server started reporting that it was unable to create new sockets.
My question is: which parameters should I change in Windows?
I thought about a maximum number of connections, but from what I have read I am not sure such a setting exists on Windows 2003. Instead, there is a number of userPorts setting, which I don't think is what I need, since I am only using one port (8080) on the server side.
Am I wrong in assuming that I am only using one port on the server side?
Are there parameters for the number of connections per port or per system, or is this perhaps affected by the amount of data transferred? I pass a lot of data, so if there is a parameter that limits based on the amount of data, I would be glad to hear about it.
Should I also reduce the time each connection waits after teardown? That might make the pool of connections more available. If so, which parameter is this?
Are there any other parameters that would be consistent with this problem?