Private GKE cluster and external HTTPS load balancer health checks failing - HTTPS

I have tested a NEG deployment in both a private and a public cluster; however, I cannot get the private cluster to work correctly with the external load balancer, even with the suggested firewall rules created.
Deployment of the private cluster firewall rules below:
gcloud compute firewall-rules create RULE_NAME --allow tcp:30000-32767,tcp:9376 --source-ranges 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
If anyone has done anything similar, some advice would be great.

When you choose HTTP, HTTPS or HTTP/2 as the protocol for the health check, the probes require an HTTP 200 (OK) response code to be successful; after some retries (also specified in the health check) your backend service will be considered healthy. Each protocol has its own success criteria, which need to be taken into consideration for your specific use case:
HTTP, HTTPS, HTTP/2
TCP and SSL
gRPC
For example, if you have a non-HTTP service running on a specific port (typical use cases are databases such as MySQL or PostgreSQL) and you want to ensure that the service is working, you can use a TCP health check: if the handshake is completed, the probe is successful and requests will reach your backends.
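A minimal sketch of such a TCP health check with the gcloud CLI, assuming a hypothetical MySQL backend on port 3306 (the health check name and all values are illustrative):

# The probe only needs the TCP handshake to complete; no application data is sent.
gcloud compute health-checks create tcp mysql-tcp-hc \
    --port=3306 \
    --check-interval=10s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3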
Also, you need to configure your firewall rules to allow traffic from the specific probe ranges; these vary according to the LB type (Network Load Balancers require different ranges, as detailed here). Otherwise the probes will not be able to reach your backends and they will be marked unhealthy.
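For the GKE case in the question, a sketch of a firewall rule admitting the documented HTTP(S) LB probe ranges (the rule name and target tag are hypothetical; port 9376 is taken from the question):

# Allow Google's health-check probes to reach the serving port on the nodes.
gcloud compute firewall-rules create allow-lb-health-checks \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:9376 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=gke-nodes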

Related

How is this website still rate limiting me despite using multiple proxies?

I have 10 proxy servers in AWS EC2, created using Squid proxy.
I have also set forwarded_for to off, to ensure that it can't be detected as a proxy.
The load balancer (nginx in EC2) uses a round-robin algorithm to ensure equal distribution of requests among the proxies.
I am sending requests to a public API.
I have tested my proxies by creating my own server (Python Flask) and checking the requests: request.remote_addr shows the IP addresses of the proxy servers, as expected, and round robin is being used.
There are no headers which include the source IP address. In fact, by looking at request.__dict__ in Python, I cannot see the source IP address anywhere.
How is the API still rate limiting me? The rate limit error occurs after the same number of requests whether I use proxies or not.
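One way to double-check what the target actually sees, sketched with curl and a public IP-echo service (the proxy hostnames are placeholders; api.ipify.org simply echoes the caller's source IP):

# Confirm each Squid proxy presents its own egress IP to the target.
for PROXY in proxy1.example.com proxy2.example.com; do
    echo -n "via ${PROXY}: "
    curl -s -x "http://${PROXY}:3128" https://api.ipify.org
    echo
done

If the echoed IPs rotate as expected, then, as observed, the limit is evidently not keyed on the source IP alone.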

How to distribute JMeter test plan requests among 2 nodes that are behind a load balancer, when neither node's IP is public?

I am using JMeter 3.1
We have a load balancer with a public IP (e.g. 192.87.00.00) with SSL implemented, and we use that IP to communicate with the LB;
the LB decides which node currently has the least number of requests, so that node gets the call.
Behind the LB there are 2 nodes with non-public IPs and a non-secure protocol, and on both nodes we implement session replication.
Whenever I run my JMeter test, all my requests go to a single node every time, as per the configuration settings of the LB. Now I
have been asked to design a test plan in which all requests are distributed between both nodes randomly.
I created the following test:
Test Plan
DNS Cache Manager
HTTP Cookie Manager
HTTP Cache Manager
Thread Group
Req 1
Req 2
Req 3
(Screenshots: Test Plan with DNS Cache Manager; Thread Group with HTTP Request)
In the HTTP Request I put the load balancer's public IP and port, and select "httpClient4" in the Implementation dropdown.
In the DNS Cache Manager I select "Use custom DNS resolver", and in the DNS Servers section I define the IP addresses of both nodes.
When I run my test plan, I notice that all my requests go to a single node. I verified this by tailing both nodes' Tomcat logs in a PuTTY console to see which node is getting the requests.
I studied the DNS Cache Manager in the Apache JMeter help and some blogs, and implemented what I learnt. Please help me in this regard.
Thanks!
Keep-alive is on in your HTTP Sampler. Turn it off.
I don't know what LB you're using, though I presume it works at the TCP level rather than terminating HTTP(S) there.
So in this case it just tunnels the packets to the actual servers, and with keep-alive it obviously sticks to whichever one it chose at the beginning.
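A rough illustration with curl (the LB address and /whoami endpoint are made up; assume /whoami returns the backend node's name):

# One curl invocation, two URLs: both requests ride the same kept-alive TCP
# connection, so a TCP-level LB delivers both to the same backend node.
curl -s http://lb.example.com/whoami http://lb.example.com/whoami

# Two separate invocations: each opens a fresh TCP connection, so the LB is
# free to choose a different node for each request.
curl -s http://lb.example.com/whoami
curl -s http://lb.example.com/whoami

In JMeter the equivalent is unchecking "Use KeepAlive" on the HTTP Request sampler, so each request opens a new connection that the LB can route independently.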

GCE: Both TCP and HTTP load balancers on one IP

I'm running a Kubernetes application on GKE, which serves HTTP requests on port 80 and websockets on port 8080.
Now, the HTTP part needs to know the client's IP address, so I have to use the HTTP load balancer as the ingress service. The websocket part then has to use a TCP load balancer, as it's clearly stated in the docs that the HTTP LB doesn't support websockets.
I got them both working, but on different IPs, and I need to have them on one.
I would expect that there is something like iptables on GCE, so I could forward traffic from port 80 to the HTTP LB and from 8080 to the TCP LB, but I can't find anything like that. Anything involving forwarding allows only one of them.
I guess I could have one instance with nginx/HAProxy doing only this, but that seems like overkill.
Appreciate any help!
There's not a great answer to this right now. Ingress objects are really HTTP only right now, and we don't really support multiple grades of ingress in a single cluster (though we want to).
GCE's HTTP LB doesn't do websockets yet.
Services have a flaw in that they lose the client IP (we are working on that). Even once we solve this, you won't be able to use GCE's L7 balancer because of the extra port you need.
The best workaround I can think of, and one that has been used by a number of users until we preserve source IPs, is this:
Run your own HAProxy or nginx (or even your own app) as a DaemonSet on some or all nodes (label-controlled) with HostPorts.
Run a GCE Network LB (outside of Kubernetes) pointing at the nodes with HostPorts.
Once we can properly preserve external IPs, you can turn this back into a plain Service.
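A hedged sketch of step 2 with the gcloud CLI (names, region, and zone are illustrative; it assumes the DaemonSet proxies expose HostPorts 80 and 8080 on the nodes). The trick is that two forwarding rules can share one reserved regional IP, which is what puts both ports on a single address:

# Reserve one regional IP and aim two forwarding rules at the same node pool.
gcloud compute addresses create app-ip --region=us-central1
gcloud compute target-pools create app-pool --region=us-central1
gcloud compute target-pools add-instances app-pool \
    --instances=gke-node-1,gke-node-2 --instances-zone=us-central1-a
gcloud compute forwarding-rules create http-fr --region=us-central1 \
    --address=app-ip --ip-protocol=TCP --ports=80 --target-pool=app-pool
gcloud compute forwarding-rules create ws-fr --region=us-central1 \
    --address=app-ip --ip-protocol=TCP --ports=8080 --target-pool=app-pool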

WebSockets and Load Balancing, a bottleneck?

Suppose you have a bunch of systems that act as WebSocket drones and a load balancer in front of those drones. When a WebSocket request comes into the LB, it chooses a WebSocket drone, and the WebSocket is established. (I use AWS ELB in TCP mode, SSL-terminated at the ELB.)
Question:
Now, does the established WebSocket go through the LB, or does the LB hand the WebSocket request off to a WebSocket drone, so that there is a direct link between the client and the WebSocket drone?
If the WebSocket connection goes through the LB, this would make the LB a huge bottleneck.
Removing the LB and handing clients the direct IP of a WebSocket drone could circumvent this bottleneck, but requires creating this logic myself, which I'm planning to do (depending on the answers to this question).
So are my thoughts on how this works correct?
AWS ELB as LB
After looking at the possible duplicate suggested by Pavel K, I conclude that the WebSocket connection will go through the AWS ELB, as in:
Browser <--WebSocket--> LB <--WebSocket--> WebSocketServer
This makes the ELB a bottleneck. What I would have wanted is:
Browser <--WebSocket--> WebSocketServer
Where the ELB is only used to give the client a hostname/IP of an available WebSocketServer.
DNS as LB
The above problem could be circumvented by balancing at the DNS level, as explained in the possible duplicate, since that way DNS will give the IP of an available WebSocketServer when ws.myapp.com is requested.
The downside is that this requires constantly updating DNS with WebSocketServer up/down changes (if your app is elastic, this becomes even more of a problem).
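A quick way to see what such a DNS round-robin hands out (ws.myapp.com from the example above; the addresses shown are placeholder output):

# Several A records come back; clients pick one, so every record must be
# kept in sync with the set of live WebSocketServers.
dig +short ws.myapp.com
# 203.0.113.10
# 203.0.113.11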
Custom LB
Another option could be to create a custom LB that constantly monitors the WebSocketServers and returns the IP of an available WebSocketServer when the client asks for one.
The downside is that the client needs to perform a separate (AJAX) request to get the IP of an available WebSocketServer, whereas with AWS ELB the load balancing happens implicitly.
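Sketched as a two-step client flow, with a hypothetical /ws-host discovery endpoint and websocat standing in for a browser's WebSocket client:

# Step 1: the extra round trip - ask the custom LB for an available drone.
WS_HOST=$(curl -s https://lb.myapp.com/ws-host)   # e.g. "ws3.myapp.com"
# Step 2: connect straight to that drone, so the long-lived WebSocket
# traffic bypasses the LB entirely.
websocat "wss://${WS_HOST}/socket"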
Conclusion
Choosing the lesser evil...

What does the Amazon ELB automatic health check do and what does it expect?

Here is the thing:
We've implemented a C++ RESTful API server, with a built-in HTTP parser and no standard HTTP server like Apache or anything of the kind.
It has been in use for several months on Amazon infrastructure, using both plain and SSL communications, and no problems related to the Amazon infrastructure have been identified.
We are deploying our first backend using Amazon ELB
Amazon ELB has a customizable health check system but also an automatic one, as stated here.
We've found no documentation of what data is sent by the health check system
The backend simply hangs on the socket read instruction and, eventually, the connection is closed.
I'm not looking for a solution to the problem, since the backend is not based on a standard web server; I just want to know what kind of message is being sent by the ELB health check system, since we've found no documentation about this anywhere.
Help is much appreciated. Thank you.
Amazon ELB has a customizable health check system but also an automatic one, as stated here
With customizable you are presumably referring to the health check configurable via the AWS Management Console (see Configure Health Check Settings) or via the API (see ConfigureHealthCheck).
The requirements to pass health checks configured this way are outlined in field Target of the HealthCheck data type documentation:
Specifies the instance being checked. The protocol is either TCP,
HTTP, HTTPS, or SSL. The range of valid ports is one (1) through
65535.
Note
TCP is the default, specified as a TCP: port pair, for example
"TCP:5000". In this case a healthcheck simply attempts to open a TCP
connection to the instance on the specified port. Failure to connect
within the configured timeout is considered unhealthy.
SSL is also specified as SSL: port pair, for example, SSL:5000.
For HTTP or HTTPS protocol, the situation is different. You have to
include a ping path in the string. HTTP is specified as a
HTTP:port;/;PathToPing; grouping, for example
"HTTP:80/weather/us/wa/seattle". In this case, a HTTP GET request is
issued to the instance on the given port and path. Any answer other
than "200 OK" within the timeout period is considered unhealthy.
The total length of the HTTP ping target needs to be 1024 16-bit
Unicode characters or less.
[emphasis mine]
With automatic you are presumably referring to the health check described in paragraph Cause within Why is the health check URL different from the URL displayed in API and Console?:
In addition to the health check you configure for your load balancer,
a second health check is performed by the service to protect against
potential side-effects caused by instances being terminated without
being deregistered. To perform this check, the load balancer opens a
TCP connection on the same port that the health check is configured to
use, and then closes the connection after the health check is
completed. [emphasis mine]
The paragraph Solution clarifies that the payload is zero here, i.e. it is similar to the non-HTTP/HTTPS method described for the configurable health check above:
This extra health check does not affect the performance of your
application because it is not sending any data to your back-end
instances. You cannot disable or turn off this health check.
Summary / Solution
Assuming your RESTful API server with its built-in HTTP parser is indeed supposed to serve HTTP only, you will need to handle two health checks:
The first one you configure yourself as an HTTP:port/PathToPing target - you'll receive an HTTP GET request and must answer with 200 OK within the specified timeout period to be considered healthy.
The second one is configured automatically by the service - it opens a TCP connection on the HTTP port configured above, sends no data, and then closes the connection after the health check is completed.
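For reference, the first (configurable) check can be set on a classic ELB with the AWS CLI; a sketch with made-up names and values:

# Configure the classic ELB health check to GET /ping on port 80.
aws elb configure-health-check --load-balancer-name my-lb \
    --health-check Target=HTTP:80/ping,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2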
In conclusion, it seems that your server might already be behaving perfectly fine and you are just irritated by the 2nd health check's behavior - does ELB actually consider your server to be unhealthy?
As far as I know, it's just an HTTP GET request looking for a 200 OK HTTP response.
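You can approximate both checks by hand; a sketch against a hypothetical backend (host and ping path are placeholders):

# The configured HTTP check: a plain GET that must return "200 OK" in time.
curl -i --max-time 5 http://backend.example.com/ping
# The automatic check: a bare TCP connect with no payload, then close.
nc -z -w 5 backend.example.com 80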
