G-WAN load balancer setup - g-wan

I'm considering using G-WAN for a backend game server. Although G-WAN can handle a lot of requests, I want it to scale automatically. G-WAN has an elastic load balancer. Are there examples of how that should be set up in code or at deployment?

IMO, load balancing is a function of the cloud or data center you are working with, not of G-WAN.
In Microsoft Azure (convenient here because it offers Linux VMs), you set up an endpoint (essentially a port, like 8080) as a load-balanced endpoint that terminates at the same port on each VM:
Set up G-WAN to listen on port 8080.
Set up a load-balanced endpoint on port 8080.
Point the load balancer at the G-WAN port, 8080.
Clients then hold sessions with either VM1 or VM2.
Auto-scaling is a function of Azure's availability sets.
I'm sure a similar process is offered on Amazon and Rackspace.
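If you want to sanity-check the balanced endpoint before pointing it at G-WAN itself, a stand-in backend on port 8080 is enough. A minimal sketch in Go (purely hypothetical; G-WAN is a standalone server, this just mimics one so you can watch sessions land on vm1 or vm2):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Stand-in for G-WAN while testing the load-balanced endpoint:
	// listen on 8080 and report which VM answered, so you can watch
	// the balancer spread sessions across the VMs.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		host, _ := os.Hostname()
		fmt.Fprintf(w, "served by %s\n", host)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```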

Related

Cloud Foundry container-to-container networking via https

We have two applications running on IBM Cloud Foundry (appA and appB).
appA accesses appB over container-to-container networking, while appB is also available externally over a Gorouter route.
The thing is, as long as our app exposes http-8080, all is good.
Now we have to do container-to-container networking over HTTPS.
We configured the app to expose https-8080. Port 8080 is used because https://docs.cloudfoundry.org/devguide/custom-ports.html states that:
By default, apps only receive requests on port 8080 for both HTTP and TCP routing,
and so must be configured, or hardcoded, to listen on this port
Container-to-container networking now works as expected using HTTPS.
But we are no longer able to reach appB over the external Gorouter route.
What is the best way to have it all up and running as we expect?
There isn't a good answer to this question, at least at the time I write this.
You do have a couple options though:
Manually set up HTTPS for the internal route. To do this, you would need to follow the instructions for your application or server of choice to configure HTTPS, then use whatever functionality your buildpack provides to inject this configuration into the application container. This also requires you to bundle and push TLS certs with your application; the platform isn't going to provide you TLS certs if you take this option.
The trick to making both the internal and public routes work is that your application needs to listen on both port 8080 and the port you choose for your HTTPS traffic. As long as you keep taking HTTP traffic on port 8080, your public routes will keep working.
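A minimal sketch of that dual-listener setup in Go (the HTTPS port 8443 and the cert/key file names are assumptions; use whatever port and bundled certs you choose):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from appB")
	})

	// Plain HTTP on 8080 keeps the public Gorouter route working.
	go func() {
		log.Fatal(http.ListenAndServe(":8080", mux))
	}()

	// HTTPS on a second port (8443 here) serves the container-to-container
	// traffic, using certs bundled and pushed with the app.
	log.Fatal(http.ListenAndServeTLS(":8443", "server.crt", "server.key", mux))
}
```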
If you want a quick but not ideal solution, you can use port 61001. On newer versions of Cloud Foundry, this port is used by Envoy to accept HTTPS traffic to your app; Envoy then proxies the request to your app via HTTP over port 8080. You can use this port for your container-to-container traffic as well; however, the subject name configured on the TLS cert won't match your route.
Here's an example of what the subject name will look like:
subject: OU=organization:639f74aa-5d97-4a47-a6b3-e9c2613729d8 + OU=space:10180e2b-33b9-44ee-9f8f-da96da17ac1a + OU=app:10a4752e-be17-41f5-bfb2-d858d49165f2; CN=b7520259-6428-4a52-60d4-5f25
Because the cert uses this format, you would need your clients to ignore certificate subject-name mismatch errors (not ideal, as that weakens HTTPS), or perhaps implement a custom hostname matcher.
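A sketch of what such a custom matcher could look like in Go: skip the default hostname check, but still verify the chain and inspect the subject yourself. The CA pool, the app GUID, and the internal URL are all assumptions for your own environment:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
)

// newClient returns an HTTP client that skips default hostname verification
// but still verifies the chain against caPool, then checks that the leaf
// cert's OU names the app GUID we expect.
func newClient(caPool *x509.CertPool, appGUID string) *http.Client {
	cfg := &tls.Config{
		InsecureSkipVerify: true, // disables hostname matching; we verify below
		VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
			leaf, err := x509.ParseCertificate(rawCerts[0])
			if err != nil {
				return err
			}
			// Chain verification (intermediates omitted for brevity).
			if _, err := leaf.Verify(x509.VerifyOptions{Roots: caPool}); err != nil {
				return err
			}
			for _, ou := range leaf.Subject.OrganizationalUnit {
				if ou == "app:"+appGUID {
					return nil // cert identifies the app we wanted
				}
			}
			return fmt.Errorf("cert does not identify app %s", appGUID)
		},
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: cfg}}
}

func main() {
	caPool := x509.NewCertPool() // assumption: load the platform CA certs here
	client := newClient(caPool, "10a4752e-be17-41f5-bfb2-d858d49165f2")
	resp, err := client.Get("https://appb.apps.internal:61001/") // hypothetical internal route
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```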
For what it's worth, I don't think you want or need to change the port. That option is typically used when an application is inflexible and unable to listen on port 8080; it changes the port for inbound traffic. Since you're only using C2C networking, you don't need that option.
What you want, from what I understand, is HTTPS for C2C traffic. In that case, the public traffic doesn't matter; it can still go through Gorouter to port 8080. For your container-to-container traffic, you can pick any port you want. You just need to make sure a network policy allows traffic on the port you choose (by default, all C2C traffic is blocked), e.g. with `cf add-network-policy appA --destination-app appB --port 8443 --protocol tcp` (exact flags vary by cf CLI version). Once the network policy is set, you can connect directly over whatever port you designate.

How to setup a websocket in Google Kubernetes Engine

How do I enable a port on Google Kubernetes Engine to accept websocket connections? Is there a way of doing so other than using an ingress controller?
WebSockets are supported by Google's global load balancer, so you can use a Kubernetes Service of type LoadBalancer to expose such a service beyond your cluster.
Do be aware that a load balancer created and managed outside Kubernetes in this way has a default timeout of 30 seconds, which interferes with WebSocket operation: connections get closed frequently, making WebSockets all but unusable.
Until this issue is resolved, you will either need to raise that timeout manually, or (recommended) use an in-cluster ingress controller (e.g. nginx), which gives you more control.
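At the application level you can also keep the connection from ever going idle. A minimal sketch, assuming the gorilla/websocket package and a ping interval comfortably under the 30-second timeout:

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func handler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println(err)
		return
	}
	defer conn.Close()

	// Ping every 20s so a 30s idle timeout on the load balancer never fires.
	// (WriteControl is safe to call concurrently with WriteMessage.)
	go func() {
		for range time.Tick(20 * time.Second) {
			if err := conn.WriteControl(websocket.PingMessage, nil,
				time.Now().Add(5*time.Second)); err != nil {
				return
			}
		}
	}()

	// Simple echo loop.
	for {
		mt, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		if err := conn.WriteMessage(mt, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```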
As per this article in the GCP documentation, there are four ways to expose a Service to external applications: ClusterIP, NodePort, a (TCP/UDP) LoadBalancer, or ExternalName.

GCE: Both TCP and HTTP load balancers on one IP

I'm running a Kubernetes application on GKE which serves HTTP requests on port 80 and WebSocket connections on port 8080.
Now, the HTTP part needs to know the client's IP address, so I have to use the HTTP load balancer as the ingress service. The WebSocket part then has to use the TCP load balancer, as the docs clearly state that the HTTP LB doesn't support WebSockets.
I got them both working, but on different IPs, and I need them on one.
I would expect there to be something like iptables on GCE, so I could forward traffic from port 80 to the HTTP LB and from port 8080 to the TCP LB, but I can't find anything like that. Everything involving forwarding allows only one of them.
I guess I could have one instance running nginx/HAProxy that does only this, but that seems like overkill.
Appreciate any help!
There's not a great answer to this right now. Ingress objects are really HTTP-only at the moment, and we don't really support multiple grades of ingress in a single cluster (though we want to).
GCE's HTTP LB doesn't do websockets yet.
Services have a flaw in that they lose the client IP (we are working on that). Even once we solve this, you won't be able to use GCE's L7 balancer because of the extra port you need.
The best workaround I can think of, and one that a number of users have adopted until we preserve source IP, is this:
Run your own HAProxy or nginx, or even your own app, as a DaemonSet on some or all nodes (label-controlled) with HostPorts; a sketch of the own-app variant follows below this list.
Run a GCE Network LB (outside of Kubernetes) pointing at the nodes with HostPorts.
Once we can properly preserve external IPs, you can turn this back into a plain Service.
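Here is a minimal sketch of that own-app splitter in Go: an HTTP reverse proxy on host port 80 (httputil.ReverseProxy appends the client IP to X-Forwarded-For) plus a raw TCP forwarder on host port 8080 for the WebSocket traffic. The backend service addresses are assumptions:

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// HTTP on :80 -> in-cluster HTTP service. ReverseProxy appends the
	// caller's IP to X-Forwarded-For, which is the whole point here.
	backend, err := url.Parse("http://http-backend.default.svc.cluster.local:80")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		log.Fatal(http.ListenAndServe(":80", httputil.NewSingleHostReverseProxy(backend)))
	}()

	// Raw TCP on :8080 -> WebSocket service, no HTTP parsing at all.
	// Note: the WebSocket backend still sees this proxy's IP, not the client's.
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			up, err := net.Dial("tcp", "ws-backend.default.svc.cluster.local:8080")
			if err != nil {
				return
			}
			defer up.Close()
			go io.Copy(up, c) // client -> backend
			io.Copy(c, up)    // backend -> client
		}(conn)
	}
}
```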

How to configure a web server security group to only accept traffic from the load balancer in AWS?

How do I configure the web server SG to only accept traffic from the load balancer in AWS?
Currently, my EC2 instance's security group has an inbound rule like this:
Type Protocol Port Range Source
HTTP TCP 80 Anywhere 0.0.0.0/0
This works fine, though I am not sure whether all my requests actually go through the load balancer (Elastic Beanstalk). If I change the Source in the inbound rule to point to the load balancer's security group, it stops working.
What is the correct way to configure this so that web servers take requests only from the load balancer ?
Put the load balancer in a security group (say sg-a).
Put the instance in the same security group (or a different one) and allow traffic from sg-a on port 80.
Load balancers talk to the instance on its internal address, which is what allows you to permit traffic from one security group to another.
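A sketch of creating that rule programmatically, assuming the aws-sdk-go (v1) EC2 client; both group IDs are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)

	// Allow TCP/80 into the instance security group, but only from the
	// load balancer's security group (sg-a), not from 0.0.0.0/0.
	_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String("sg-instance"), // placeholder: instance SG ID
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(80),
			ToPort:     aws.Int64(80),
			UserIdGroupPairs: []*ec2.UserIdGroupPair{{
				GroupId: aws.String("sg-a"), // placeholder: load balancer SG ID
			}},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```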

Getting (non-HTTP) Client IP with load-balancer

Say I want to run something like the nyan cat telnet server (http://miku.acm.uiuc.edu/) and I need to handle 10,000 concurrent connections in total. I have 10 servers in addition to a load balancer; each server can handle 1,000 concurrent connections, and I want the load balancer to divide the traffic randomly among the 10 servers.
From what I've read, it's fairly simple for a load balancer to pass an HTTP request (along with the client IP) to the backend server, perhaps with FastCGI or with an X-Forwarded-For-style header.
What would be the simplest way for the load balancer to pass the client IP to the backend server in this case with a simple TCP server? Would a hardware load balancer be needed, or are there ways to do this simply through software?
In other words, is there a uniform way to pass the client IP when load balancing non-HTTP traffic, the same way Google gets the client IP when it load-balances its Google Talk XMPP servers or Gmail IMAP servers?
This isn't for anything in specific; I'm just curious about if and how it can be done. Thanks in advance!
The simplest way is for the load balancer to make itself completely invisible and pass the connection on with the source and destination IP addresses unmolested (often called direct server return). For this to work, the same IP address must be assigned (as a loopback address, not to a physical interface) to all 10 servers, and that is the address clients connect to. Internet traffic to that address has to go to the load balancer, and the load balancer must be the default gateway for the servers.
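With that setup the backend needs nothing special: packets arrive with the original source address, so a plain TCP server already sees the real client IP. A minimal sketch in Go (the port and payload are placeholders; binding port 23 requires root):

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Listen on the shared (loopback-assigned) service address. Because the
	// load balancer passes packets through unmodified, RemoteAddr is the
	// real client IP, with no PROXY protocol or X- header needed.
	ln, err := net.Listen("tcp", ":23")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			log.Printf("client connected from %s", c.RemoteAddr())
			fmt.Fprintln(c, "nyan!") // stand-in for the real telnet payload
		}(conn)
	}
}
```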
