Getting HTTPS working with Traefik and GCE Ingress

I'm after a very simple requirement, yet it seems impossible to make Traefik redirect traffic from HTTP to HTTPS when it sits behind an external load balancer.
This is my GCE Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: platform
  name: public-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "kubernetes-cluster-dev-ip"
    kubernetes.io/ingress.class: "gce"
    ingress.gcp.kubernetes.io/pre-shared-cert: "application-dev-ssl,application-dev-graphql-ssl"
spec:
  backend:
    serviceName: traefik-ingress-service
    servicePort: 80
It receives HTTP(S) traffic and forwards it to Traefik on port 80.
I initially tried Traefik's own scheme-matching redirect with this configuration:
[entryPoints]
  [entryPoints.http]
  address = ":80"
  compress = true
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
  compress = true
    [entryPoints.https.tls]
But this obviously ends in an infinite redirect loop, because the load balancer always proxies traffic to Traefik on port 80.
The simple solution is exactly what the GCE docs suggest:
https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https
namely, checking the http_x_forwarded_proto header and redirecting based on that.
The Nginx equivalent:
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
Can someone advise on the best way of handling this with Traefik, please?

To recap, you have a GCE L7 (Layer 7) load balancer proxying to another L7 load balancer, Traefik, which you can in turn use to proxy to another backend service. So it looks like you have something like this:
GCE L7 LB HTTP 80
=> Forwarded to Traefik HTTP 80
=> Redirect initial request to HTTPS 443
=> The client thinks it needs to talk to GCE L7 LB HTTPS 443
=> GCE L7 LB HTTPS 443
=> Forwarded to Traefik HTTP 80
=> Infinite loop
and you need to have something like this:
GCE L7 LB HTTP 80
=> Forwarded to Traefik HTTP 80
=> Redirect initial request to HTTPS 443
=> The client thinks it needs to talk to GCE L7 LB HTTPS 443
=> GCE L7 LB HTTPS 443
=> Forwarded to Traefik HTTPS 443
It isn't documented anywhere whether Traefik redirects to HTTPS based on the value of http_x_forwarded_proto being http, but that would be the general assumption. In any case, the Ingress doesn't know anything about an HTTPS backend (you didn't specify how you configured the HTTPS GCE LB endpoint).
It's documented here how to make the GCE LB create an HTTPS endpoint that forwards directly to your HTTPS backend. Basically, you can try adding the service.alpha.kubernetes.io/app-protocols annotation to the HTTPS Traefik Service:
apiVersion: v1
kind: Service
metadata:
  name: traefik-https
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
  labels:
    app: echo
spec:
  type: NodePort
  ports:
  - port: 443
    protocol: TCP
    name: my-https-port
  selector:
    app: traefik
So you would have something like this:
GCE L7 LB HTTP 80
=> Forwarded to Traefik HTTP 80
=> Redirect initial request to HTTPS 443
=> The client thinks it needs to talk to GCE L7 LB HTTPS 443
=> GCE L7 LB HTTPS 443
=> Forwarded to the Traefik HTTPS service
=> The Service forwards to Traefik port 443
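Alternatively, if upgrading is an option, Traefik v2 can handle this itself: once an entrypoint is told to trust forwarded headers, the redirectScheme middleware decides based on X-Forwarded-Proto, so it doesn't loop behind an SSL-terminating LB. A minimal sketch, assuming Traefik v2 and entrypoint/middleware names of my own choosing:
# static configuration (traefik.toml)
[entryPoints.web]
  address = ":80"
  [entryPoints.web.forwardedHeaders]
    # trust X-Forwarded-* from anywhere; prefer trustedIPs in production
    insecure = true

# dynamic configuration (file provider); attach the middleware to your routers
[http.middlewares.to-https.redirectScheme]
  scheme = "https"
  permanent = true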
Hope this helps.

Related

Hosting Redis on EC2 - ConnectionTimeoutError

I have an EC2 instance behind a load balancer. The security group attached to it allows inbound connections (both IPv4 and IPv6) on port 6379. I am able to connect to my Redis server with redis-cli:
redis-cli -h ec2-**-**-**-*.us-west-1.compute.amazonaws.com -p 6379
However, when I try to connect with Node.js and express-session I get a ConnectionTimeoutError on EC2, while locally it works fine:
import { createClient } from 'redis'

const redisClient = createClient() // no host/port given, defaults to localhost:6379
redisClient.connect().catch(console.error)
If there is a race condition here, like others have mentioned, why does it happen on EC2 and not locally? Is the default localhost incorrect since there is a load balancer in front of the instance?
Based on your comments, I'd say the problem is the load balancer. Redis communicates over its own TCP-based protocol. An ALB is only for HTTP/HTTPS traffic, so it cannot handle this protocol. Use a Network Load Balancer instead, with a TCP listener. Also make sure your security group rule allows TCP traffic on port 6379.
The Redis client should be instantiated explicitly in a setup like this one (covers both IPv4 and IPv6 inbound traffic):
createClient({ socket: { host: '127.0.0.1', port: 6379 }, legacyMode: true })
As Redis is self-hosted on EC2 with a load balancer in front of the instance, localhost may not be mapped to 127.0.0.1 as a loopback address. This means that the default createClient(), without a host or port specified, might try to establish a connection to a different internal address.
(Make sure to allow inbound traffic on TCP 6379, or whichever port you are using.)
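Putting it together, a minimal sketch for connecting through an NLB (the DNS name below is a placeholder, not from the original post):
import { createClient } from 'redis'

// hypothetical NLB endpoint with a TCP listener on 6379
const client = createClient({
  socket: { host: 'my-redis-nlb-1234567890.elb.us-west-1.amazonaws.com', port: 6379 }
})

client.on('error', (err) => console.error('Redis error:', err))
await client.connect()
console.log(await client.ping()) // 'PONG' once the listener and security group are right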

HAPROXY: multiple domains, multiple ports, multiple server

I have two domains (abc.com, xyz.com), four ports (443, 8080, 80, 3000) and three servers (10.147.19.1, 10.147.19.2, 10.147.19.4).
I want to set up HAProxy to achieve these routes:
abc.com:443 ---> 10.147.19.1:80
abc.com:8080 ---> 10.147.19.2:3000
xyz.com:443 ---> 10.147.19.1:80
xyz.com:8080 ---> 10.147.19.4:3000
From the docs, I can pick a backend in the frontend using HAProxy maps, so I tried binding port 443 with a map:
frontend port_443
    bind *:443
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/maps/port_443.map,be_default)]
Here is the map:
# Domain   backend
abc.com    be_1
xyz.com    be_1
The question is: how do I define the backends with the right ports? Please help!
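For reference, a minimal sketch of what the missing pieces might look like, assuming one map file per listening port (the be_* backend names and the port_8080 frontend are assumptions, not from the original post). The port lives in each backend's server line, so you need one backend per target host:port pair:
frontend port_8080
    bind *:8080
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/maps/port_8080.map,be_default)]

# /etc/haproxy/maps/port_8080.map would contain:
#   abc.com be_2
#   xyz.com be_4

backend be_1
    server srv1 10.147.19.1:80

backend be_2
    server srv2 10.147.19.2:3000

backend be_4
    server srv4 10.147.19.4:3000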

Nginx HTTPS->HTTP gets 403 "SSL required" error with Spring

The problem
I'm getting a 403 "SSL required" response from Spring when trying to route through my ELB to the Kubernetes Nginx ingress controller.
Setup
My setup is as follows:
I've got an ELB (AWS) with ACM for my Kubernetes cluster (created by kops), which routes all requests to the
Nginx ingress controller, which in turn routes all requests according to the rules dictated in the
Ingress, which passes the traffic on to the
Service, which exposes port 80 and routes it to port 8080 in the
Pods selected with the label "app=foobar" (described in a Deployment).
The Pods are running a Spring Boot web app v2.1.3.
So basically:
https://foo.bar.com(:443) -> ingress -> http://foo.bar.svc.cluster.local:80
This works like a charm for everything, except Spring Boot.
For some reason, I keep getting 403 "SSL required" from Spring.
One note to keep in mind here: my Spring application does not have anything to do with SSL, and I don't want it to do anything of that nature. For this example's purposes, these should be regular REST API requests, with the SSL termination happening outside the container.
What I tried so far
Port-forwarding to the service itself and issuing requests: works fine.
Disabling CSRF in WebSecurityConfigurerAdapter.
Setting the ingress annotation nginx.ingress.kubernetes.io/force-ssl-redirect=true: gives a TOO_MANY_REDIRECTS error when I try it (instead of the 403).
Setting the ingress annotation nginx.ingress.kubernetes.io/ssl-redirect=true: doesn't do anything.
Setting the ingress annotation nginx.ingress.kubernetes.io/enable-cors: "true": doesn't do anything.
Also nginx.ingress.kubernetes.io/ssl-passthrough: "true".
Also nginx.ingress.kubernetes.io/secure-backends: "true".
Also kubernetes.io/tls-acme: "true".
I tried a whole bunch of other stuff that I can't really remember right now.
How it all looks in my cluster
The Nginx ingress controller annotations look like this (I'm using the official nginx ingress controller Helm chart, with very little modification other than this):
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foobar
  namespace: api
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: http
        path: /
Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: foobar
  namespace: api
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: foobar
What I think the problem is
My hunch is that it's something with the X-Forwarded-* headers: Spring doing its magic behind the scenes, trying to be all smart and deciding that I need SSL based on some headers, without me explicitly asking for it. But I haven't figured it out yet.
I searched far and wide for a solution, but I couldn't find anything to ease my pain... hope you'll be able to help!
Edit
I found out that my setup without k8s and Nginx works fine: the ELB passes X-Forwarded-Port: 443 and X-Forwarded-Proto: https. On my k8s cluster with Nginx, I put in a listener client that spits out all the headers, and they turn out to be X-Forwarded-Port: 80 and X-Forwarded-Proto: http.
Thanks to all the people that helped out; I actually found the answer.
Within the code there were validations that all requests should come from a secure source, and the Nginx ingress controller changed these headers (X-Forwarded-Proto and X-Forwarded-Port) because SSL was terminated at the ELB and handed to the ingress controller as HTTP.
To fix that I did the following:
Added use-proxy-protocol: true to the config map, which passed the correct headers but produced errors about broken connections (I don't remember the actual error right now; I'll edit this answer later if anyone asks for it).
To fix those errors I added the following to the Nginx ingress controller annotations configuration:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
This made sure that all traffic uses the proxy protocol; I also had to change the backend protocol from HTTP to TCP.
Doing this ensured that all requests routed through the ELB preserve their original X-Forwarded-* headers, which are passed on to the Nginx ingress controller and, in turn, to my application that requires them.
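Put together, a sketch of what the relevant objects might look like (resource names here are assumptions based on the chart defaults, not from the original post):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    # TLS terminates at the ELB; traffic reaches the controller as HTTP + proxy protocol
    targetPort: http
  selector:
    app: nginx-ingress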

Is there a way to provide custom value other than ClientIP for sessionAffinity in kubernetes?

First of all, requests go to a proxy service that I've implemented, and the service forwards them to pods randomly when not using sessionAffinity. I want to send requests to the same pod based on a custom value that I set in the request parameters using the POST method. I've used sessionAffinity in my Service YAML.
Here's the Service YAML with sessionAffinity:
apiVersion: v1
kind: Service
metadata:
  name: abcd-service
  namespace: ab-services
spec:
  ports:
  - name: http
    protocol: TCP
    port: ****
    targetPort: ****
    nodePort: *****
  selector:
    app: abcd-pod
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600
  type: NodePort
Now the problem is that when requests are sent by multiple clients from the same IP address, all of them are directed to a single pod and not to the other replicas, causing uneven load balancing. But I don't want requests to be forwarded randomly either. I want all requests, whether from the same client or different clients, to be forwarded based on a custom value I set in the POST request, not by client IP, considering that ClientIP resolves to the source IP of each request.
As you can read here, it currently supports only the ClientIP and None values:
sessionAffinity (string): Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
Unfortunately, no other values are allowed.
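If you front the pods with an Nginx ingress instead of hitting the Service directly, one possible workaround (my suggestion, not part of the Service API) is consistent hashing on a request value via the nginx.ingress.kubernetes.io/upstream-hash-by annotation. A sketch, assuming the custom value is also sent as a header, since Nginx cannot easily hash on a POST body (the hostname and header name are hypothetical):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: abcd-ingress
  namespace: ab-services
  annotations:
    # pick the upstream pod by consistent hash of the X-Session-Key header
    nginx.ingress.kubernetes.io/upstream-hash-by: "$http_x_session_key"
spec:
  rules:
  - host: abcd.example.com
    http:
      paths:
      - backend:
          serviceName: abcd-service
          servicePort: http
        path: /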

Not able to Access rest service deployed in EC2 from another EC2

I have a REST application running on an EC2 instance (say win1). I have another EC2 instance (say win2) running in the same VPC as win1. I'm not able to access/call the REST URL exposed by win1 from win2. I have configured the security group to allow inbound HTTP requests.
Security group rule:
Inbound:
Type           Protocol    Port Range    Source
HTTP (80)      TCP         80            0.0.0.0/0
HTTPS (443)    TCP         443           0.0.0.0/0
In outbound I have enabled all traffic. The REST API exposed by win1 is:
http://<IP>:9090/ui
win1 exposes the service on port 9090, but your security group only allows inbound ports 80 and 443.
You need:
Inbound:
Type           Protocol    Port Range    Source
HTTP (80)      TCP         80            0.0.0.0/0
Custom TCP     TCP         9090          0.0.0.0/0
HTTPS (443)    TCP         443           0.0.0.0/0
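For example, adding that rule with the AWS CLI (the security group ID is a placeholder; consider scoping the source CIDR to your VPC range instead of 0.0.0.0/0 if the API shouldn't be public):
# hypothetical group ID; restrict --cidr to your VPC range where possible
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 9090 \
    --cidr 10.0.0.0/16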
