Say I have one GCP HTTP load balancer and two web servers in the backend.
Now, if 100 users (browsers) hit my service, I know the load balancer has 100 open TCP connections.
But will the load balancer also open 100 TCP connections to my web servers?
In short: does the GCP HTTP load balancer pool TCP connections to my web servers or not?
I want to forward all HTTPS requests sent to my ELB to my backend servers.
Is it possible for the ELB not to decrypt HTTPS requests before routing them to the backend servers, leaving decryption to the backend, so that I would not need to create an HTTPS listener? I am using another proxy layer (Apigee) between the client and the ELB, which provides an abstraction or facade for my backend service APIs along with security, rate limiting, quotas, and analytics; the traffic it sends to the ELB is already encrypted.
Yes, you can set up an AWS ELB as a plain TCP pass-through, so that the TLS handshake happens end-to-end between your Apigee layer and your backend servers, which then handle the decryption themselves.
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: TCP
    InstancePort: 443
    InstanceProtocol: TCP
from: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-tcp-passthrough.html
Is it possible to store the WebSocket connection and use it to distribute the load?
For example, similar to how AWS ALB distributes traffic.
Hasura GraphQL Engine is deployed on a Cloud Foundry instance backed by AWS, exposed at a subdomain via an AWS ELB. The console is exposed at https://hasura.cloud.domain.com/console and the GraphQL API accepts queries at https://hasura.cloud.domain.com/v1alpha1/graphql.
But when a subscription is executed from the console, it fails with the following error in the JS console:
vendor.js:1 WebSocket connection to 'wss://hasura.cloud.domain.com/v1alpha1/graphql' failed: Error during WebSocket handshake: Unexpected response code: 200
Analyzing the WebSocket frames in Chrome shows an error with (Opcode -1).
In short, the client is unable to open a WebSocket connection.
Some load balancers do not support passing WebSocket handshake requests containing the Upgrade header to the CF router. For instance, the Amazon Web Services (AWS) Elastic Load Balancer (ELB) does not support this behavior. In this scenario, you must configure your load balancer to forward TCP traffic to your CF router to support WebSockets.
ref: https://docs.cloudfoundry.org/adminguide/supporting-websockets.html#config
Basically, some configuration is required on the AWS ELB and the CF router to get WebSockets working. This is typically done by forwarding all TCP connections on a non-standard port to the CF router; we have learned from our clients that this port is typically 4443.
So, to make WebSocket connections (and thus subscriptions) work, use wss://hasura.cloud.domain.com:4443/v1alpha1/graphql as the endpoint.
The console can be opened at https://hasura.cloud.domain.com:4443 as well.
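The handshake failure above comes down to the Upgrade header never reaching the backend: a WebSocket client opens the connection with a plain HTTP GET carrying Upgrade/Connection headers and expects a "101 Switching Protocols" response, not the 200 seen in the error. A minimal, illustrative sketch of what that initial request looks like (build_handshake is a hypothetical helper, not part of Hasura or the ELB):

```python
import base64
import os


def build_handshake(host: str, path: str) -> str:
    """Build the HTTP Upgrade request a WebSocket client sends first.

    If a proxy strips the Upgrade header, the backend answers with a
    normal 200 instead of "101 Switching Protocols" -- exactly the
    "Unexpected response code: 200" error above.
    """
    key = base64.b64encode(os.urandom(16)).decode()  # random Sec-WebSocket-Key
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )


request = build_handshake("hasura.cloud.domain.com:4443", "/v1alpha1/graphql")
print(request.splitlines()[0])
```

A TCP pass-through (as on port 4443 above) delivers these headers to the CF router untouched, which is why the subscription then works.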
I am working on a Spring Boot application.
I want to know how to place a load balancer in front of the application so that load is distributed across a number of servers.
I googled and found Netflix libraries like Eureka, Hystrix, Ribbon, and Archaius that are supposed to help with load balancing.
But I could not find out how these components distribute requests and balance load while providing high reliability and availability to all users of a particular service.
I am going through all of these but cannot find an entry point to start from.
You can use HAProxy.
You can run it on your server with your own configuration file, for example:
global
    daemon
    maxconn 256

defaults
    mode tcp
    timeout connect 5000ms

listen http-in
    timeout client 180s
    timeout server 180s
    bind 127.0.0.1:80
    server server1 157.166.226.27:8080 maxconn 32 check
    server server2 157.166.226.28:8080 maxconn 32 check
    server server3 157.166.226.29:8080 maxconn 32 check
    server server4 157.166.226.30:8080 maxconn 32 check
    server server5 157.166.226.31:8080 maxconn 32 check
    server server6 157.166.226.32:8080 maxconn 32 check
This distributes every HTTP request arriving on port 80 of localhost across the listed servers using the round-robin algorithm. For details, see the HAProxy documentation.
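For intuition, round robin simply hands successive requests to successive servers in order, wrapping around at the end. A tiny Python sketch of the idea (illustrative only; the server names mirror the HAProxy config above):

```python
from itertools import cycle

# The six backends from the HAProxy config above, by name only.
servers = [f"server{i}" for i in range(1, 7)]
backend_cycle = cycle(servers)

# Eight incoming requests: the 7th and 8th wrap back to the start.
assignments = [next(backend_cycle) for _ in range(8)]
print(assignments)
# -> ['server1', 'server2', 'server3', 'server4', 'server5', 'server6', 'server1', 'server2']
```

Real HAProxy round robin additionally honors per-server weights and skips servers marked down by the health checks ("check" in the config), but the rotation principle is the same.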
Understanding that your application offers REST services, I suggest you do not pursue the Netflix APIs. They are great, but they will not help with your use case. Instead, have a look at HAProxy, nginx, or httpd for simple load-balancing capabilities. The good part is that you don't have to look into session stickiness, since REST is stateless by default.
How do I configure the web server's security group to accept traffic only from the load balancer in AWS?
Currently, my EC2 instance's security group has an inbound rule like this:
Type    Protocol    Port Range    Source
HTTP    TCP         80            Anywhere (0.0.0.0/0)
This works fine, though I am not sure whether all my requests actually go through the load balancer (Elastic Beanstalk). If I change the Source in the inbound rule to the load balancer's security group, it stops working.
What is the correct way to configure this so that the web servers accept requests only from the load balancer?
Put the load balancer in a security group (say sg-a).
Put the instance in the same security group (or a different one) and allow traffic from sg-a on port 80.
The load balancer talks to the instance on its internal (private) address, which is why allowing traffic from one security group to another works.
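If you prefer the AWS CLI over the console, the rule allowing traffic from the load balancer's security group looks roughly like this (sg-aaaa and sg-bbbb are placeholder IDs; substitute your own):

```shell
# sg-bbbb: the instance's security group; sg-aaaa: the load balancer's (sg-a above).
# Authorizes inbound port 80 on sg-bbbb only from members of sg-aaaa.
aws ec2 authorize-security-group-ingress \
    --group-id sg-bbbb \
    --protocol tcp \
    --port 80 \
    --source-group sg-aaaa
```

Note that with Elastic Beanstalk the instance security group is managed by the environment, so it is usually better to express this rule through the environment's configuration rather than editing the group by hand.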