We have a web server and a client, both written in Go, that interact with each other. We want HAProxy to load balance requests between several instances of the server, but it's not working. The client always connects to the same server while it's still up.
If I look at the output of "netstat -anp", I can see that a persistent connection has been established between the client and the server through HAProxy. I tried setting the Connection header in the response to "close", but that didn't work at all.
Needless to say, I'm completely confused by this. My first question is: is this a problem with the client, the server, or HAProxy? How does one force the client to disconnect? Am I missing something? curl works fine, so I know that HAProxy does load balance, but curl also shuts down completely when finished, which is why I suspect the persistent connection is causing my issues, since the client and server are long running.
Just as an FYI, I'm using go-martini on the server.
Thanks.
HTTP/1.1 uses KeepAlive by default. Since the connections aren't closed, HAProxy can't balance the requests between different backends.
You have a couple options:
Force the connection to close after each request in your code. Setting Request.Close = true on either the client or the server will send a Connection: close header, telling both sides to close the TCP connection.
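For example, a minimal sketch of both sides in Go's net/http (the URL, port, and handler below are placeholders, not your actual setup):

package main

import (
	"io"
	"log"
	"net/http"
)

// Client side: close the TCP connection after this request, so the next
// request has to go through HAProxy's balancing again.
func fetchOnce(url string) (string, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", err
	}
	req.Close = true // sends "Connection: close"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

// Server side: tell the client (and HAProxy) to close the connection
// once the response has been written.
func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Connection", "close")
	io.WriteString(w, "hello\n")
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}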
Alternatively, you could have HAProxy alter the connections by setting option http-server-close, so the server-side connection is closed after each request, or option httpclose, to shut down both sides after each request.
http-server-close is usually the best option, since that still maintains persistent connections for the client, while proxying each request individually.
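A rough idea of where that goes in haproxy.cfg (the backend name, server addresses, and timeouts here are placeholders, not your actual config):

defaults
    mode http
    option http-server-close   # keep client connections open, close the server side after each request
    timeout connect 5s
    timeout client  30s
    timeout server  30s

backend go_servers
    balance roundrobin
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check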
Related
In a non-blocking, event-driven web server for a basic static website, I don't understand the mechanics I should implement for a "new client".
When a browser connects to my socket, I get the clientfd from accept and answer with an HTTP response, but when the browser is reloaded, should it create a new connection and answer, or should it reuse the same connection and just send the new response?
I use poll to handle multiple fds, but when I reload the page it's the same connection (which makes sense to me). But then I open a new tab, and it's still the same connection (it only does accept once). I'm not finding any documentation on this, and I don't have a way to test with multiple clients whether it reuses the same one every time.
You can't reuse a connection from another client, new connections must always be accepted as new connections. It doesn't matter what kind of server application you're writing.
However, if the client passes the header Connection: keep-alive you should not close the connection once the response is finished, but keep the connection open for future requests from the same client.
I hope I understand correctly, but what I personally do is create a map of sockets, where each socket is a client.
Every time a socket disconnects, it's removed from that map, and so on.
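A minimal sketch of that idea in Go (assuming a plain TCP server; a per-connection goroutine takes the place of a poll loop here, and the map just tracks the live clients):

package main

import (
	"log"
	"net"
	"sync"
)

var (
	mu      sync.Mutex
	clients = make(map[net.Conn]bool) // each entry is one connected client
)

func handle(conn net.Conn) {
	mu.Lock()
	clients[conn] = true
	mu.Unlock()

	defer func() {
		// On disconnect, remove the client from the map.
		mu.Lock()
		delete(clients, conn)
		mu.Unlock()
		conn.Close()
	}()

	buf := make([]byte, 4096)
	for {
		n, err := conn.Read(buf)
		if err != nil {
			return // client disconnected
		}
		conn.Write(buf[:n]) // echo back; a real server would parse HTTP here
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Println(err)
			continue
		}
		go handle(conn)
	}
}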
Whether to use a new connection is the browser's choice. You don't get much of a choice.
However, you can tell the browser that you don't allow it to reuse a connection, if you send Connection: close in the response. In this case, the browser is forced to open a new connection for the next request. This is the only control you have.
If you want to test several connections at the same time, you could open several different browsers, or you could use a different program, such as some HTTP load testing tool (there are many). You could also send it a web page with many images; browsers should try to download all the images using several connections at the same time.
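If you want a quick way to simulate several simultaneous clients yourself, a small test program works too. Here is a sketch in Go (the URL is a placeholder); giving each goroutine its own Transport guarantees each one opens its own TCP connection:

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// A separate Transport per goroutine means a separate connection pool,
			// so this request cannot reuse another goroutine's connection.
			client := &http.Client{Transport: &http.Transport{}}
			resp, err := client.Get("http://localhost:8080/")
			if err != nil {
				fmt.Println(id, "error:", err)
				return
			}
			resp.Body.Close()
			fmt.Println(id, "status:", resp.Status)
		}(i)
	}
	wg.Wait()
}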
A web server doesn't create clients. A web server has clients -- new clients trying to connect, and existing clients communicating on the sockets that it has already opened.
To handle new clients, a web server should pretty much be calling accept all the time, unless it's already handling the maximum number of clients that it's configured to handle.
As soon as you get a new connection from accept, hand it off to other threads to process and call accept again.
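Here is a sketch of that loop in Go (goroutines instead of threads, and http.ReadRequest just to keep the example short; this is an illustration, not production code):

package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

// serve handles one client: it keeps reading requests off the same connection
// (keep-alive) until the client closes it or asks for "Connection: close".
func serve(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		req, err := http.ReadRequest(r)
		if err != nil {
			return // client closed the connection (or sent something unparsable)
		}
		io.Copy(io.Discard, req.Body) // drain any request body before the next read
		req.Body.Close()

		body := "hello\n"
		fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s", len(body), body)

		if req.Close { // the client sent "Connection: close"
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		// Call accept all the time; each accepted connection is handed off to its
		// own goroutine so the listener can immediately accept the next client.
		conn, err := ln.Accept()
		if err != nil {
			log.Println(err)
			continue
		}
		go serve(conn)
	}
}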
I want to make the client side of my proxy server use keep-alive. That is, I don't want the proxy client to do a TCP close handshake every time.
Please look at this example in netty.
Adding the keepAlive option to this example doesn't seem to work properly, because it creates a new client and connects every time the server gets a request, and closes the client when the response arrives.
So how can I make my proxy client use keep-alive? Is there any reference/example for it?
Using the SO_KEEPALIVE socket option doesn't mean that the server (or the other peer in the connection) should ignore an explicit request to close the connection. It helps in cases like:
Idle sessions timing out or getting killed by the other end due to inactivity
Idle or long-running requests being disconnected by a firewall in between after a certain time passes (e.g. 1 hour, for resource clean-up purposes)
If the client's logic is not to re-use the same socket for different requests (i.e. if its application logic uses a new socket for each request), there's nothing you can do about that on your proxy.
The same argument is valid for the "back-end" side of your proxy as well. If the server you're proxying to doesn't allow the socket to be re-used, and explicitly closes a connection after a request is served, that wouldn't work as you wanted either.
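To make the distinction concrete, here is a sketch in Go rather than Netty (the URL is a placeholder): SO_KEEPALIVE is a TCP-level probe configured on the socket, while "keep-alive" in the HTTP sense is simply the client choosing to reuse the same connection for several requests.

package main

import (
	"log"
	"net"
	"net/http"
	"time"
)

func main() {
	// TCP-level keepalive: the OS sends probes on an idle connection.
	// This does NOT by itself make anyone reuse the connection for more requests.
	dialer := &net.Dialer{KeepAlive: 30 * time.Second}

	// Application-level reuse: the HTTP transport pools connections and sends
	// subsequent requests over them instead of reconnecting each time.
	transport := &http.Transport{
		DialContext:         dialer.DialContext,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     90 * time.Second,
	}
	client := &http.Client{Transport: transport}

	for i := 0; i < 3; i++ {
		resp, err := client.Get("http://backend.example.com/") // placeholder URL
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close() // closing the body returns the connection to the pool
	}
}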
If you are not closing the connection on your side then the proxy is. Different proxy servers will behave in different ways.
Try sending Connection: Keep-Alive as a header.
If that doesn't work, try also sending Proxy-Connection: Keep-Alive as a header.
When an HTTP proxy server is used, is the number of connections negotiated between the client and the proxy reduced compared to the client connecting directly to various HTTP sites (without a proxy)?
For example, when connecting directly to two different domains, it is clear that at least two connections must be made. In the case of a proxy, does the client usually use a single connect to the proxy for both "connections"?
Similarly, are there cases where a client connecting to a single domain but accessing several resources would see a reduced number of connections when using a proxy? E.g., can the proxy present an HTTP/1.1-style persistent connection even when the ultimate destination doesn't support it? Are proxies able to use longer persistent-connection timeout periods?
In the case of a proxy, does the client usually use a single connect to the proxy for both "connections"?
While it would be possible to use the same connection to an HTTP proxy for HTTP requests to different targets, most clients don't do it, from what I've seen. Also, it would only work with HTTP and not HTTPS, since in the latter case the whole TLS connection to the target is tunneled through the proxy, and the close of this tunneled connection is also the close of the underlying TCP connection to the proxy. And HTTP requests to multiple targets would only be possible with an HTTP proxy, not a SOCKS proxy, since SOCKS essentially builds a tunnel to a specific target; this target is set at the beginning of the connection and can never be changed.
That said, while I've not seen it for browser-to-proxy connections, I've seen a patched Squid used (long ago) to do this in order to optimize proxy-to-proxy connections.
E.g., can the proxy present an HTTP/1.1-style persistent connection even when the ultimate destination doesn't support it?
While this would be possible too, it is also not common. Usually the proxy does not fully decouple client and server, i.e. a server-triggered close of the connection between server and proxy usually results in a close of the connection between proxy and client too. The reason is probably that it would only work for HTTP anyway (not HTTPS), and that it makes the implementation of the proxy more complex: things like retrying a request when the server suddenly closes a persistent connection between requests would now have to be handled by the proxy, instead of simply forwarding the close and letting the client deal with it.
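For what it's worth, this is roughly how a client gets pointed at an HTTP proxy in Go (proxy address and targets are placeholders); whether the proxy connection is actually shared across targets is up to the client and proxy implementations, as described above:

package main

import (
	"log"
	"net/http"
	"net/url"
)

func main() {
	proxyURL, err := url.Parse("http://proxy.example.com:3128") // placeholder proxy
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			Proxy: http.ProxyURL(proxyURL),
		},
	}

	// Two plain-HTTP requests to different origins, both sent via the proxy.
	// HTTPS requests would instead each go through their own CONNECT tunnel.
	for _, target := range []string{"http://example.com/", "http://example.org/"} {
		resp, err := client.Get(target)
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
	}
}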
How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
The persistent connection is needed to avoid rapid connection and disconnection:
APNs treats rapid connection and disconnection as a denial-of-service attack
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another StackExchange website, please do tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where the connections were persistent by default, and HTTP/2 (also being multiplexed) continues on that model of connections being persistent by default.
Independently of the language you are using to develop your applications, any HTTP/2-compliant client will, by default, use persistent connections.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
Typically these libraries employ a connection pool that keeps the connections open, typically until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
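For example, a sketch in Go (since Go 1.6 the standard net/http client negotiates HTTP/2 over TLS automatically; real APNs requests also need certificate- or token-based authentication and a /3/device/<token> path, both omitted here):

package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// One shared client: its transport keeps the HTTP/2 connection to APNs open
	// and reuses it for every request instead of reconnecting each time.
	client := &http.Client{Timeout: 30 * time.Second}

	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://api.push.apple.com/") // auth omitted, see note above
		if err != nil {
			log.Fatal(err)
		}
		log.Println("protocol:", resp.Proto) // "HTTP/2.0" once negotiated
		resp.Body.Close()
	}
}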
I also ran into this problem!
If the connection is idle for a long time (about 1 hour), then the poll function catches no socket status change. It always returns 0, even though on_frame_send_callback was invoked.
Can anyone figure out the problem?
My application consists of two pieces: a WebSocket server, which is hosted on an OpenShift DIY cartridge, and a WebSocket client, which connects to my server from a home PC. The WebSocket server is written using embedded Jetty and its WebSocket library. The client side is written in Java using the Tyrus library. It works pretty well, except for one glitch that I cannot explain.
When running the WebSocket server on the OpenShift DIY cartridge, the WebSocket connection gets dropped every 2 minutes. The connection drops happen quite precisely, so it is obviously not related to potential network outages. Besides, I have tested exactly the same application on Heroku and there were no connection drops. Moreover, the onClose(...) method receives a NORMAL_CLOSURE close code.
I am almost sure that the OpenShift Apache layer closes idle WebSocket connections every 2 minutes, even though the WebSocket client sends Ping messages and receives Pong messages from the server. Has anyone experienced this type of WebSocket connection drop? Are there any parameters I can use to prevent the drops?
Thank you in advance.
Update: I added a dedicated thread on the server side to send Pong messages to the client (Jetty does not support Pong handlers yet, so I cannot use Ping messages) and the drops disappeared. It seems the OpenShift Apache layer started treating the connection as "alive" and no longer closes it. Then I noticed one more strange behavior: someone pings my server-side application via HTTPS every hour. The HTTP headers look like this:
HEAD / HTTP/1.1
Accept: */*
User-Agent: Ruby
X-Forwarded-Proto: https
X-Forwarded-Host: ....rhcloud.com
Connection: keep-alive
X-Request-Start: t=1409771442217677
X-Forwarded-For: 10.158.21.225
Host: wsproxy-gimes4dieni.rhcloud.com
X-Forwarded-Port: 443
X-Client-IP: 10.158.21.225
X-Forwarded-SSL-Client-Cert: (null)
X-Forwarded-Server: localhost
I do not use Ruby, I am only using HTTP, and the IP address is different from my regular requests. Does anybody have a clue whether this is some sort of OpenShift "service" or whether it is coming from the Internet?
SSH into your project, open ~/haproxy/conf/haproxy.cfg with a text editor such as vi, and edit timeout queue, timeout client, and timeout server to whatever you want. I set mine to 5m, which is 5 minutes. After you have made the changes, exit and run
~/haproxy/bin/control restart
Now your websocket timeout should be set.
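For reference, the relevant part of haproxy.cfg would look roughly like this (the 5m values are just the example from above):

defaults
    timeout queue   5m
    timeout client  5m
    timeout server  5m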