GCE Load Balancer HTTPS and HTTP routes

I have a load balancer connected to a few backend instances in my cloud project. Everything works great, but I need to add a specific subdomain route so that the HTTPS protocol works too.
Let's say I have xyz.com on port 80 and www.xyz.com on port 443; I want both of them to work the same way.
I created two incoming-traffic IPs, the first for plain HTTP and the second for HTTPS. Each does its job fine, but visitors aren't automatically redirected to the right protocol when they access the domain.
Any ideas or sources that can help me?
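One common workaround (a sketch, not from this thread) is to have the HTTP side answer every request with a permanent redirect to the HTTPS site, for example with nginx on the backend instances behind the HTTP forwarding rule; the host names here are placeholders:

```nginx
# Answer all plain-HTTP requests with a 301 to the HTTPS host
server {
    listen 80;
    server_name xyz.com www.xyz.com;
    return 301 https://www.xyz.com$request_uri;
}
```

With this in place, both forwarding rules can stay, and the HTTP one exists only to bounce browsers over to HTTPS.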

Related

How to handle HTTPS with spring-boot after google load balancer has been configured to handle https?

My understanding is that if the Google load balancer has been configured to handle HTTPS (by adding an SSL certificate), I don't need an SSL certificate on my Compute Engine instances. As I understand it, the load balancer receives the secure request and then just forwards it to an instance over plain HTTP.
Now, the frontend for the load balancer is configured for two ports: 8080 for plain HTTP and 443 for HTTPS. If I only want to handle HTTPS, is setting the Spring Boot application to listen on port 443 the only thing I have to do to make it work? That is, simply adding the following to application.properties:
server.port = 443
Or is there more configuration needed on the Spring side? I'm genuinely interested in learning this and have researched and tried to read up on it, but I can't seem to find any good resources covering something similar. I get the feeling that a lot of the knowledge around these kinds of problems comes from practical experience.
If you want the Google load balancer to terminate HTTPS and forward HTTP to your backend services, simply configure the load balancer with an HTTP backend. If you're using an HTTPS backend, you'll have to listen for and handle HTTPS traffic in your app.
The difference is whether the traffic between the load balancer and your backend (inside GCP) is encrypted or not. Usually, HTTPS termination at the load balancer level is enough.
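With HTTPS terminated at the load balancer, a common pattern (a sketch under my own assumptions, not from the answer) is to keep the app on a plain-HTTP port matching the backend service, and to tell Spring to trust the X-Forwarded-* headers the balancer sets so that generated redirects and request.isSecure() reflect the original HTTPS request. The property name assumes Spring Boot 2.2+; older releases used server.use-forward-headers=true:

```properties
# Plain-HTTP port the load balancer's backend service points at
server.port=8080
# Trust X-Forwarded-Proto / X-Forwarded-For set by the load balancer
server.forward-headers-strategy=framework
```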

HTTP hole punching - Web server behind NAT

I have some Raspberry Pi servers behind NATs (non-configurable, ISP-provided) on dynamic IPs, and a "master" server with a static IP and port forwarding configured on its router. I want to be able to access the page served by any of those RPi servers from any browser. I've read about TCP hole punching, but I can't figure out how to make it work from a browser (I guess using AJAX). I could use the "master" server as a relay, but I don't know how. BTW, all traffic to/from the servers uses HTTPS, not HTTP.
EDIT:
The Raspberries and the server are NOT on the same network.
You might investigate the use of a reverse proxy (I've used NGINX). A reverse proxy lets traffic hit your server with the static IP, which then forwards the HTTP traffic to the other servers behind the firewall.
It gets a little trickier with HTTPS, but it can be worked out.
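One way the master could act as that relay (my own sketch, not from the answer; host names and ports are placeholders) is for each Pi to hold open a reverse SSH tunnel to the master, with NGINX on the master proxying a subdomain to that tunnel's local port:

```shell
# On each Raspberry Pi: keep a reverse tunnel alive so the master can
# reach the Pi's local HTTPS port 443 via its own loopback port 9001
autossh -M 0 -N -R 9001:localhost:443 tunnel@master.example.com
```

```nginx
# On the master: NGINX terminates TLS for pi1.example.com and proxies
# to the Pi through the tunnel (certificate directives omitted)
server {
    listen 443 ssl;
    server_name pi1.example.com;
    location / {
        proxy_pass https://127.0.0.1:9001;
    }
}
```

Because the Pi initiates the tunnel outbound, this works through a non-configurable NAT and survives the Pi's dynamic IP changing.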

GCE: Both TCP and HTTP load balancers on one IP

I'm running a Kubernetes application on GKE that serves HTTP requests on port 80 and websockets on port 8080.
Now, the HTTP part needs to know the client's IP address, so I have to use the HTTP load balancer as the ingress service. The websocket part then has to use a TCP load balancer, since the docs clearly state that the HTTP LB doesn't support websockets.
I got them both working, but on different IPs, and I need to have them on one.
I would expect there to be something like iptables on GCE, so I could forward traffic from port 80 to the HTTP LB and from 8080 to the TCP LB, but I can't find anything like that. Every forwarding option allows only one of them.
I guess I could run one instance with nginx/HAProxy doing only this, but that seems like overkill.
Appreciate any help!
There's not a great answer to this right now. Ingress objects are HTTP-only at the moment, and we don't really support multiple grades of ingress in a single cluster (though we want to).
GCE's HTTP LB doesn't do websockets yet.
Services have a flaw in that they lose the client IP (we are working on that). Even once we solve this, you won't be able to use GCE's L7 balancer because of the extra port you need.
The best workaround I can think of, which a number of users have adopted until we preserve source IPs, is this:
Run your own HAProxy or nginx (or even your own app) as a DaemonSet on some or all nodes (label-controlled) with hostPorts.
Run a GCE network LB (outside of Kubernetes) pointing at the nodes with hostPorts.
Once we can properly preserve external IPs, you can turn this back into a plain Service.
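The DaemonSet half of that workaround can be sketched as follows (apiVersion per current Kubernetes; names, labels, and image are placeholders, and the proxy's own config is omitted):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      nodeSelector:
        role: edge          # label-controlled: only runs on "edge" nodes
      containers:
      - name: proxy
        image: nginx:stable
        ports:
        - containerPort: 80
          hostPort: 80      # HTTP traffic from the network LB
        - containerPort: 8080
          hostPort: 8080    # websocket traffic from the network LB
```

The GCE network LB then points one forwarding rule at those nodes, so both ports share a single external IP.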

In AWS, how do I configure the web server security group to only accept traffic from the load balancer?

Currently, my EC2 instance's security group has an inbound rule like this:
Type    Protocol    Port Range    Source
HTTP    TCP         80            Anywhere (0.0.0.0/0)
This works fine, though I'm not sure whether all my requests actually go through the load balancer (Elastic Beanstalk). If I change the Source in the inbound rules to point to the load balancer's security group, it stops working.
What is the correct way to configure this so that the web servers accept requests only from the load balancer?
Put the load balancer in a security group, say sg-a.
Put the instance in the same security group (or a different one) and allow traffic from sg-a on port 80.
Load balancers talk to the instance on its internal address, which is what lets you allow traffic from one security group to another.
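As a sketch, that second rule could be created with the AWS CLI; the group IDs below are placeholders:

```shell
# Allow port 80 into the instance's security group (sg-b) only when the
# traffic originates from the load balancer's security group (sg-a)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0bbbbbbbbbbbbbbbb \
    --protocol tcp \
    --port 80 \
    --source-group sg-0aaaaaaaaaaaaaaaa
```

Once this rule is in place, the wide-open 0.0.0.0/0 rule on port 80 can be removed, and only the load balancer can reach the instances.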

Getting (non-HTTP) Client IP with load-balancer

Say I want to run something like the Nyan Cat telnet server (http://miku.acm.uiuc.edu/) and I need to handle 10,000 concurrent connections in total. I have 10 servers in addition to a load balancer. Each server can handle 1,000 concurrent connections, and I want to put the load balancer in front to divide the traffic randomly across the 10 servers.
From what I've read, it's fairly simple for a load balancer to pass an HTTP request (along with the client IP) to the backend server, for example with FastCGI or an X- header.
What would be the simplest way for the load balancer to pass the client IP to the backend server in this case, with a simple TCP server? Would a hardware load balancer be needed, or are there ways to do this purely in software?
In other words, is there a uniform way to pass the client IP when load balancing non-HTTP traffic, the same way Google gets the client IP when it load-balances its Google Talk XMPP servers or Gmail IMAP servers?
This isn't for anything specific; I'm just curious whether and how it can be done. Thanks in advance!
The simplest way would be for the load balancer to make itself completely invisible and pass the connection on with the source and destination IP addresses unmolested (a setup often called direct server return). For this to work, the same IP address must be assigned (as a loopback address, not to a physical interface) on all 10 servers, and that would be the IP address the clients connect to. Internet traffic to that IP address has to go to the load balancer, and the load balancer must be the default gateway for the servers.
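Beyond direct server return, another widely used software-only approach (my addition, not from this answer) is HAProxy's PROXY protocol: the balancer prepends a single text line describing the original connection before any payload, and the backend parses it to recover the client IP. A minimal parser for version 1 of that header, as a sketch:

```python
def parse_proxy_v1(line: bytes):
    """Parse a PROXY protocol v1 header line.

    Returns (proto, src_ip, dst_ip, src_port, dst_port), or None for a
    'PROXY UNKNOWN' header (the balancer could not determine the client).
    """
    if not line.endswith(b"\r\n"):
        raise ValueError("missing CRLF terminator")
    parts = line[:-2].decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    if parts[1] == "UNKNOWN":
        return None
    proto, src_ip, dst_ip, src_port, dst_port = parts[1:6]
    return proto, src_ip, dst_ip, int(src_port), int(dst_port)

# Example header as a balancer would send it ahead of the client's data
print(parse_proxy_v1(b"PROXY TCP4 203.0.113.7 10.0.0.5 51234 23\r\n"))
# → ('TCP4', '203.0.113.7', '10.0.0.5', 51234, 23)
```

On the balancer side, HAProxy enables this with `send-proxy` on a server line; a number of other servers (nginx, for example) can consume the header natively, so no hardware balancer is required.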
