I can't clearly understand how HAProxy performs health checks in HTTP mode.
I need to resend an HTTP request to another server (the next in the list of backend servers) if the first one returned an error status code (for example, 503).
I need the following behaviour from HAProxy:
1) I receive some HTTP request
2) I send it to the first server
3) If I get a 503 (or some other error code), this HTTP request must be sent to the next server
4) If it returns a 200 code, the next HTTP requests of this TCP session go to the first server
I know this is easy to implement in nginx (using proxy_next_upstream, I suppose), but I need to use HAProxy, because the software I have to connect to works at layer 4 and I can't change it, so I need to keep groups of HTTP messages in the same TCP session. I can keep them in the same session in HAProxy, but not in nginx.
I know about httpchk and observe, but they are not what I need.
The first lets me send some synthetic HTTP request, not the HTTP request I actually received (the server needs to analyse the real HTTP traffic to decide which HTTP status to answer).
The second marks my servers as dead and doesn't send messages to them any more, but I need those messages to be analysed.
I really need behaviour like nginx's, but with the ability to keep HTTP messages within the same TCP session.
Is there perhaps some nice way to implement it with ACLs?
Could anyone please give me a detailed explanation of how HAProxy handles load balancing in HTTP mode, or offer some solution to my problem?
UPDATE:
For example, when I tried to do it with observe, I used this configuration:
global
    log 127.0.0.1 local0
    maxconn 10000
    user haproxy
    group haproxy
    daemon

defaults
    log global
    option dontlognull
    retries 3
    maxconn 10000
    contimeout 10000
    clitimeout 50000
    srvtimeout 50000

listen zti 127.0.0.1:1111
    mode http
    balance roundrobin
    server zti_1 127.0.0.1:4444 check observe layer7 error-limit 1 on-error mark-down
    server zti_2 127.0.0.1:5555 check observe layer7 error-limit 1 on-error mark-down
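For testing, the backends on 127.0.0.1:4444 and 127.0.0.1:5555 can be as simple as this Go sketch (the -fail flag is only my illustrative way of forcing 503 responses, not part of the real setup):

package main

import (
	"flag"
	"net/http"
)

func main() {
	addr := flag.String("addr", "127.0.0.1:4444", "listen address")
	fail := flag.Bool("fail", false, "answer 503 instead of 200")
	flag.Parse()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if *fail {
			// "observe layer7" counts this response as an error; with
			// error-limit 1 and on-error mark-down the server goes DOWN.
			http.Error(w, "failing on purpose", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("ok from " + *addr)) // implicit 200
	})
	http.ListenAndServe(*addr, nil)
}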
Thanks,
Dmitry
You can use the option httpchk.
When "option httpchk" is specified, a complete HTTP request is sent
once the TCP connection is established, and responses 2xx and 3xx are
considered valid, while all other ones indicate a server failure,
including the lack of any response.
listen zti 127.0.0.1:1111
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    server zti_1 127.0.0.1:4444 check inter 5s rise 1 fall 2
    server zti_2 127.0.0.1:5555 check inter 5s rise 1 fall 2
Source: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20httpchk
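If you control the backends, here is a rough sketch of a server that cooperates with that check: return 2xx while healthy and 503 while draining (the /drain endpoint is a placeholder of mine, not anything HAProxy requires):

package main

import (
	"net/http"
	"sync/atomic"
)

var healthy atomic.Bool

func main() {
	healthy.Store(true)

	// "option httpchk HEAD / HTTP/1.0" probes "/"; net/http serves HEAD
	// through the same handler with the body suppressed.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !healthy.Load() {
			// Check fails; the server is marked DOWN after "fall 2".
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK) // 2xx/3xx means the check passes
	})

	// Hypothetical drain endpoint: flip the flag before shutting down
	// so HAProxy stops sending traffic first.
	http.HandleFunc("/drain", func(w http.ResponseWriter, r *http.Request) {
		healthy.Store(false)
	})

	http.ListenAndServe("127.0.0.1:4444", nil)
}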
Related
I want to make the proxy server's client use keep-alive, so that the proxy client doesn't make a TCP close handshake every time.
Please look at this example in Netty.
Adding the keepAlive option to this example doesn't seem to work properly, because it creates a client and connects every time the server gets a request, then closes the client when the response arrives.
So how can I make my proxy client use keep-alive? Is there any reference/example for it?
Using the SO_KEEPALIVE socket option doesn't mean that the server (or the other peer in the connection) should ignore an explicit request to close the connection. It helps in cases like:
Idle sessions timing out / getting killed by the other end due to inactivity
Idle or long-running requests being disconnected by a firewall in between after a certain time passes (e.g. 1 hour, for resource clean-up purposes)
If the client's logic is not to re-use the same socket for different requests (i.e. if its application logic uses a new socket for each request), there's nothing you can do about that on your proxy.
The same argument is valid for the "back-end" side of your proxy as well. If the server you're proxying to doesn't allow the socket to be re-used, and explicitly closes a connection after a request is served, that wouldn't work as you wanted either.
If you are not closing the connection on your side then the proxy is. Different proxy servers will behave in different ways.
Try sending Connection: Keep-Alive as a header.
If that doesn't work, try also sending Proxy-Connection: Keep-Alive as a header.
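For the record, this is roughly what sending those two headers looks like on the wire (a sketch in Go rather than Netty, purely for brevity; the host names are placeholders):

package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Placeholder address: your proxy's host and port.
	conn, err := net.Dial("tcp", "proxy.example.com:8080")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Both headers from the answer above. Proxy-Connection is
	// non-standard, but some older proxies expect it.
	fmt.Fprint(conn, "GET / HTTP/1.1\r\n"+
		"Host: origin.example.com\r\n"+
		"Connection: Keep-Alive\r\n"+
		"Proxy-Connection: Keep-Alive\r\n"+
		"\r\n")

	// Read just the status line; if the proxy honours keep-alive, the
	// same conn can carry the next request once the response body has
	// been consumed.
	status, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		panic(err)
	}
	fmt.Print(status)
}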
Case/Assumption:
There is a server that is written by someone else.
This server has an endpoint GET /api/watch.
This endpoint is plain HTTP/1.1
This endpoint will write events like
{type:"foo", message:"bar"}
to the response stream once they appear (one event per line and then a flush).
Sometimes this server writes events every second to the output, sometimes every 15 minutes.
Between my client and this server there is a third-party load balancer which considers a connection stale if there is no activity on it for more than 60 seconds, and then drops the connection without closing it.
The client is written in plain Go and simply makes a GET request to this endpoint.
Once the LB marks the connection as stale, the client (the same happens with curl, too) is not notified that the connection was dropped by the LB and keeps waiting for data in the response to the GET request.
So: What are my possibilities to deal with this situation?
What is not possible:
Modify the server.
Use another server.
Use something else than this endpoint and how it is written.
Modify the Load Balancer.
Use another LB.
Leave the LB out of the connection.
15 minutes is an incredibly long quiet period for basic HTTP - possibly better suited to WebSockets. Short of a protocol change, you should be able to adjust the timeout period on the load balancer (hard to say, since you didn't specify what the LB is) to better suit your use case, though not all load balancers allow timeouts as high as 15 minutes.
If you can't change the protocol and can't raise the timeout high enough, you would have to send keepalive messages from the server every so often (just short of the timeout period, so maybe 55s with your current config, or a little less than the highest timeout you're able to set on the LB). This would have to be something the client knows to discard, like {"type": "keepalive"} - something easily identifiable on the client side as a "fake" message for keepalive purposes; a client-side sketch follows.
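On the client side, a hedged Go sketch of both halves: discard the keepalive fakes, and use a watchdog so a silent drop is at least detected (the URL and the 90-second window are assumptions of mine, not from the question):

package main

import (
	"bufio"
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A bit above the server's assumed keepalive interval.
	const idleLimit = 90 * time.Second

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "GET", "http://lb.example.com/api/watch", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Abort the in-flight request if the stream goes quiet for too long.
	watchdog := time.AfterFunc(idleLimit, cancel)
	defer watchdog.Stop()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		watchdog.Reset(idleLimit) // any line, real or fake, resets the clock
		line := scanner.Text()
		if line == `{"type": "keepalive"}` { // discard the fake messages
			continue
		}
		fmt.Println("event:", line)
	}
	// scanner.Err() reports "context canceled" when the watchdog fired;
	// the caller would typically reconnect at this point.
}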
We use HAProxy as an HTTP load balancer. Sometimes one of our servers stops responding while still accepting HTTP connection requests. So the stats page displays the server as green ("accessible"), but our Nagios server says "CRITICAL - Socket timeout after 20 seconds", and that server is actually not responding.
How can I tell HAProxy to check the page response time and, if it takes longer than a timeout, tag the server as DOWN?
Since you did not specify what kind of health check you are running, I can only direct you to the manual that explains all kinds.
https://www.haproxy.com/doc/aloha/7.0/haproxy/healthchecks.html
Concentrate on setting the correct timeout for your health check.
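As a rough illustration (a sketch of mine; the backend name, path, and address are placeholders), "timeout check" bounds how long a check may wait for a response before it counts as failed:

backend app
    mode http
    option httpchk GET /health
    # fail the check if no response arrives within 5 seconds
    timeout check 5s
    # two consecutive failed checks mark the server DOWN
    server app_1 10.0.0.1:8080 check inter 10s fall 2 rise 1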
What is the use of the Keep-Alive option in JMeter, and how does it work?
I did a performance test using JMeter 3.0.
In my recorded script the Keep-Alive option is checked,
so I kept the Keep-Alive option checked in my real test script.
With the Keep-Alive option checked, I got errors at 75 concurrent VUs.
Error message: XXX.XXXX.XXXX:XXX server refused to respond
If I uncheck the Keep-Alive option, I am able to go up to 500 VUs without errors.
In this case, do we need to use the Keep-Alive option or not?
Keep-alive is an HTTP feature for keeping a persistent connection across round trips, so that the client does not initiate a new one on every single request. This feature has many benefits, but one of the trade-offs is that it holds resources on the server side, and that can be an issue under heavy load.
In your case, I guess that you've simply consumed all resources on the server with 75 open connections and that it can't serve further requests. This error does not necessarily mean your server can't serve more than 75 connections, because it all depends on your HTTP server's config.
Example of an Apache config:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 100
Keep-alive on Wikipedia
Ugh! Just ran into this. According to JMeter's documentation:
http://svn.apache.org/repos/asf/jmeter/tags/v5_1_RC2/docs/usermanual/component_reference.html
JMeter sets the Connection: keep-alive header. This does not work properly with the default HTTP implementation, as connection re-use is not under user-control. It does work with the Apache HttpComponents HttpClient implementations.
In other words, JMeter will send the header, but with the default implementation it will not re-use connections.
To use Keep-Alive effectively, you need to select 'HttpClient4' from the Implementation dropdown on the Advanced tab.
HTTP keep-alive, i.e. an HTTP persistent connection, is an instruction that allows a single TCP connection to remain open for multiple HTTP requests/responses.
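To make that re-use visible outside JMeter, here is a small sketch of mine in Go (the URL is a placeholder) using httptrace, which reports whether each request re-used an existing TCP connection:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptrace"
)

func main() {
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Println("connection re-used:", info.Reused)
		},
	}
	for i := 0; i < 3; i++ {
		req, err := http.NewRequest("GET", "http://example.com/", nil)
		if err != nil {
			panic(err)
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		// The body must be drained and closed, or the connection
		// cannot go back into the pool for re-use.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}
	// Typical output: false, true, true - the later requests ride the
	// same TCP connection thanks to keep-alive.
}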
We have a web server and a client, both written in Go, that interact with each other. We want HAProxy to load balance requests between several instances of the server, but it's not working: the client always connects to the same server while it's still up.
If I look at the output of "netstat -anp", I can see that there is a persistent connection established between the client and the server through HAProxy. I tried setting the Connection header in the response to "close", but that didn't work at all.
Needless to say, I'm completely confused by this. My first question is: is this a problem with the client, the server, or HAProxy? How does one force the client to disconnect? Am I missing something here? curl works fine, so I know that HAProxy does load balance, but curl also completely shuts down when finished, hence I suspect it's the persistent connection that's causing my issues, since the client and server are long-running.
Just as an FYI, I'm using go-martini on the server.
Thanks.
HTTP/1.1 uses KeepAlive by default. Since the connections aren't closed, HAProxy can't balance the requests between different backends.
You have a couple options:
Force the connection to close after each request in your code. Setting Request.Close = true on either the client or the server will send a Connection: close header, telling both sides to close the TCP connection (see the sketch below).
Alternatively, you could have HAProxy alter the requests by setting option http-server-close, so the backend connection is closed after each request, or option httpclose to shut down both sides after each request.
option http-server-close is usually the best option, since it still maintains persistent connections for the client while proxying each request individually.
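A minimal sketch of the first option on the Go client side (the URL is a placeholder, not from the question):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder URL: the HAProxy frontend address.
	req, err := http.NewRequest("GET", "http://haproxy.example.com/", nil)
	if err != nil {
		panic(err)
	}
	// Sends "Connection: close", so each request gets a fresh TCP
	// connection and HAProxy can balance every request independently.
	req.Close = true

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}

With option http-server-close in the HAProxy config instead, neither side needs this: the client keeps its connection to HAProxy open, while HAProxy opens a connection per request to the backends.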