Nginx HTTPS->HTTP gets 403 "SSL required" error with Spring

The problem
I'm getting 403 SSL required from Spring when trying to route through my ELB to Kubernetes Nginx ingress controller.
Setup
My setup is as follows:
I've got an ELB (AWS) with ACM for my Kubernetes cluster (created by kops), which routes all requests to the
Nginx Ingress Controller, which in turn routes all requests according to the rules dictated in the
Ingress, which passes the traffic on to the
Service, which exposes port 80 and routes it to port 8080 in the
Pods selected with the label "app=foobar" (which are described in a Deployment).
The Pods are running a Spring Boot web app v2.1.3
So basically:
https://foo.bar.com(:443) -> ingress -> http://foo.bar.svc.cluster.local:80
This works like a charm for everything. Except Spring Boot.
For some reason, I keep getting 403 - SSL required from Spring.
One note to keep in mind here: my Spring application does not have anything to do with SSL, and I don't want it to do anything of that nature. For this example's purposes, these should be regular REST API requests, with the SSL termination happening outside the container.
What I tried so far
Port-forwarding to the service itself and requesting - it works fine.
Disabling CSRF in WebSecurityConfigurerAdapter
Putting ingress annotation nginx.ingress.kubernetes.io/force-ssl-redirect=true - it gives a TOO_MANY_REDIRECTS error when I try it (instead of the 403)
Putting ingress annotation nginx.ingress.kubernetes.io/ssl-redirect=true - doesn't do anything
Putting ingress annotation nginx.ingress.kubernetes.io/enable-cors: "true" - doesn't do anything
Also nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Also nginx.ingress.kubernetes.io/secure-backends: "true"
Also kubernetes.io/tls-acme: "true"
I tried a whole bunch of other stuff that I can't really remember right now
How it all looks in my cluster
The Nginx ingress controller annotations look like this (I'm using the official nginx ingress controller helm chart, with very few modifications other than this):
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
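For context, a hedged sketch of where these annotations live in the Helm values for the controller's Service (assuming the chart exposes controller.service.annotations, as the official chart does; the ARN is the same placeholder as above):
# values.yaml sketch - annotations applied to the ingress controller's Service
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"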
Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foobar
  namespace: api
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: http
        path: /
Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: foobar
  namespace: api
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: foobar
What I think the problem is
My hunch is that it's something with the X-Forwarded headers, with Spring doing its magic behind the scenes, trying to be smart and deciding that I need SSL based on some headers without me explicitly asking for it. But I haven't figured it out yet.
I searched far and wide for a solution, but I couldn't find anything to ease my pain... hope you'll be able to help!
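(For what it's worth, on the Spring side the mapping of these headers onto the request is controlled by Boot's forwarded-header support; a hedged application.yml sketch, assuming Spring Boot 2.1's server.use-forward-headers property, which only helps if the proxy actually sends the right headers:)
# application.yml sketch (Spring Boot 2.1): honor X-Forwarded-Proto / X-Forwarded-Port
# from the proxy when deciding whether a request is "secure".
server:
  use-forward-headers: true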
Edit
I found out that my previous setup (without k8s and nginx) works fine: the ELB passes X-Forwarded-Port: 443 and X-Forwarded-Proto: https. On my k8s cluster with nginx, however, I put in a listener that dumps all request headers, and they come through as X-Forwarded-Port: 80 and X-Forwarded-Proto: http.

Thanks for all the people that helped out, I actually found the answer.
Within the code there were some validations that all requests must come from a secure source, and the Nginx Ingress Controller changed these headers (X-Forwarded-Proto and X-Forwarded-Port) because SSL was terminated at the ELB and the traffic was handed to the ingress controller as plain HTTP.
To fix that I did the following:
Added use-proxy-protocol: true to the config map - this passed the correct headers, but I then got errors about broken connections (I don't remember the exact error right now; I'll edit this answer later if anyone asks for it)
To fix those errors I added the following to the nginx ingress controller annotations configuration:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
This makes sure that all traffic uses the proxy protocol, and I also had to change the backend-protocol from HTTP to TCP.
Doing this ensures that all requests routed through the ELB preserve their original X-Forwarded-* headers and are passed on to the Nginx Ingress Controller, which in turn passes them on to my applications, which require these headers.
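For completeness, a hedged sketch of the ConfigMap change (the ConfigMap name and namespace depend on your Helm release and are placeholders here):
# ConfigMap sketch: use-proxy-protocol makes NGINX read the client address and
# original protocol from the PROXY protocol header the ELB now sends.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"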

Related

Service Discovery with Envoy

How does it work with Envoy?
Let's say I have configured an upstream cluster like this:
clusters:
- name: "service_a_cluster"
  connect_timeout: "0.25s"
  type: "strict_dns"
  lb_policy: "ROUND_ROBIN"
  hosts:
  - socket_address:
      address: "service_a"
      port_value: 8786
How is my Envoy instance (ClusterManager?) going to resolve service_a?
To whom is it going to send DNS queries?
Envoy has internal mechanisms for doing resolution, and these are all available through configuration. It looks like you're using the Envoy v2 APIs, so the relevant high-level config is in the cluster object here.
If you read that, you'll notice the hosts field references the type field. This type field tells Envoy how to handle discovery/resolution. The full details of that mechanism are here.
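As a hedged illustration (v2 API; the resolver address below is a placeholder): with STRICT_DNS, Envoy periodically re-resolves "service_a" via the system resolver (in Kubernetes this usually means kube-dns/CoreDNS via /etc/resolv.conf) unless you point it elsewhere with dns_resolvers.
# Cluster sketch: explicit DNS resolution settings (placeholder resolver address)
clusters:
- name: "service_a_cluster"
  connect_timeout: "0.25s"
  type: "STRICT_DNS"
  dns_lookup_family: V4_ONLY
  dns_resolvers:
  - socket_address:
      address: "10.0.0.10"   # e.g. your cluster DNS service IP (placeholder)
      port_value: 53
  lb_policy: "ROUND_ROBIN"
  hosts:
  - socket_address:
      address: "service_a"
      port_value: 8786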

How to connect to the gRPC service inside k8s cluster from outside gRPC client

I have a gRPC server running on port 9000 with gRPC-gateway running on port 9080.
I can make requests to my service with Postman using the following link: http://cluster1.example.com/api/v1/namespaces/default/services/my-service:9080/proxy
How can I connect to my service from gRPC client (on my local machine, which is outside of the cluster) using grpc.Dial()?
Example:
conn, err := grpc.Dial(...?, grpc.WithInsecure())
if err != nil {
    panic(err)
}
Short answer:
This is mostly not a Golang question; it is a Kubernetes question. You have to set up the Kubernetes part, and then use it from Go as you would anywhere else.
You can refer to #blackgreen's answer for a simple and temporary way.
Details
Kubernetes uses an overlay network (Flannel in most cases); communication inside the cluster works by default, but it is isolated from the outside.
Of course there are projects like Calico that can connect the inside and outside networks, but that is another story.
There are several solutions if we want to access the pods from outside.
kubectl
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Kubectl uses socat to create a tunnel and forward one or more local ports to a pod.
The port forward will end when you stop the command, but it is a good choice if you want to temporarily access the pod for debugging.
kubectl port-forward redis-master-765d459796-258hz 7000:6379
Service
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
Service is an abstract way to expose an application running on a set of Pods as a network service.
When accessing from outside, there are several kinds of Service to use; NodePort may be a good choice in most cases.
It uses iptables or IPVS to open the same port on every node and forward that traffic to the target port.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
Ingress
https://kubernetes.io/docs/concepts/services-networking/ingress/
Ingress is a layer 7 proxy managing external network access to Services; since gRPC is also built on top of HTTP/2, Ingress works well for it.
Ingress should be the choice if you are exposing a production application; a sketch is below.
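A hedged sketch, assuming the NGINX ingress controller and an existing TLS secret; the hostname, secret name, and service name are placeholders, and the backend-protocol annotation is the part specific to gRPC:
# Ingress sketch: route TLS traffic for grpc.example.com to the gRPC service on port 9000.
# The annotation tells the NGINX ingress controller to speak HTTP/2 (gRPC) to the backend.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-grpc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  tls:
  - hosts:
    - grpc.example.com
    secretName: grpc-example-tls
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 9000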
You should be able to connect to services in your k8s cluster from local with port forwarding:
kubectl port-forward --context <mycontext> -n <mynamespace> svc/my-service 9000:9000
And then you just pass the gRPC target into Dial with localhost and no scheme:
conn, err := grpc.Dial("localhost:9000", grpc.WithInsecure())
if err != nil {
    panic(err)
}
I might be stating the obvious, but of course the server also must be started in insecure mode (no credentials); otherwise you might get an Unavailable response code.

Is it possible to access Grafana and Prometheus through reverse proxy using Nginx on same server

Is it possible to configure a reverse proxy using nginx for Grafana and Prometheus on the same server? I have configured Prometheus access through HTTPS (nginx listening on port 443 and proxying to port 9090). This works fine, but configuring Grafana, which is on the same server, to be accessed through HTTPS has been impossible. I tried listening on port 80 and proxying to port 3000, but it always defaults to the HTTP port. I also tried another listening port, but that never worked.
Has anyone done this before and please can you share your valuable experience. Thanks.
Maybe this Docker Compose setup can be helpful: https://github.com/vegasbrianc/prometheus/blob/master/README.md
The suggestion is to move the SSL termination to a web server (Nginx, Traefik, HAProxy) and forward the requests in plain text to the underlying services (Prometheus and Grafana). Here are some examples: HAProxy exposing Prometheus, and Traefik.

Is there a way to provide custom value other than ClientIP for sessionAffinity in kubernetes?

First of all, requests go to a proxy service that I've implemented; the service forwards requests to pods randomly when not using sessionAffinity. I want to send requests to the same pod based on a custom value that I set in the request parameters using the POST method. I've used sessionAffinity in my service YAML.
Here's service yml with sessionAffinity:
apiVersion: v1
kind: Service
metadata:
  name: abcd-service
  namespace: ab-services
spec:
  ports:
  - name: http
    protocol: TCP
    port: ****
    targetPort: ****
    nodePort: *****
  selector:
    app: abcd-pod
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600
  type: NodePort
Now the problem is that when requests are sent by multiple clients from the same IP address, all of them are directed to a single pod and not to the other replicas, causing uneven load balancing. But I don't want requests to be forwarded randomly either. I want all requests, whether from the same client or from different clients, to be forwarded based on a custom value that I set in the POST request, and not by client IP, considering that the client IP resolves to the source IP of each request.
As you can read here, it currently supports only ClientIP and None values.
sessionAffinity string Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity.
Must be ClientIP or None. Defaults to None. More info:
https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
Unfortunately, there are no other values allowed.

Conecting to GC Storage from a Pod running SpringBoot application

I've made a Spring Boot application that authenticates with Google Cloud Storage and performs actions on it. It works locally, but when I deploy it to GKE as a Pod, it runs into some errors.
I have a VPC environment where I have Google Cloud Storage and a Kubernetes cluster that runs some Spring Boot applications which perform actions on it through the com.google.cloud.storage library.
Istio is enabled for the cluster, along with a Gateway resource with secure HTTPS that targets the ingress load balancer, as defined here:
https://istio.io/docs/tasks/traffic-management/secure-ingress/sds/
All my pods are reached through a VirtualService of this Gateway, and that works fine since they have the Istio sidecar container, so I can reach them from outside.
So, I have configured this application in the DEV environment to get the credentials from the ENV values:
ENV GOOGLE_APPLICATION_CREDENTIALS="/app/service-account.json"
I know it's not safe, but I just want to make sure it's authenticating. And as far as I can see in the logs, it is.
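(As an aside, a hedged sketch of how the same GOOGLE_APPLICATION_CREDENTIALS file is often provided to a pod without baking it into the image, by mounting it from a Secret; every name below is a placeholder:)
# Deployment sketch: mount the service-account JSON from a Secret and point
# GOOGLE_APPLICATION_CREDENTIALS at the mounted file. Names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-springboot-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-springboot-app
  template:
    metadata:
      labels:
        app: my-springboot-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-springboot-app:dev   # placeholder image
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/service-account.json
        volumeMounts:
        - name: google-sa
          mountPath: /var/secrets/google
          readOnly: true
      volumes:
      - name: google-sa
        secret:
          secretName: gcs-service-account   # e.g. created with: kubectl create secret generic gcs-service-account --from-file=service-account.json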
As my code manipulates Storage, an object of this type is needed; I get one by doing:
this.storage = StorageOptions.getDefaultInstance().getService();
It works fine when running locally. But when I try the same with the API running inside the Pod container on GKE, whenever I try to interact with the Storage it returns errors like:
[2019-04-25T03:17:40.040Z] [org.apache.juli.logging.DirectJDKLog] [http-nio-8080-exec-1] [175] [ERROR] transactionId=d781f21a-b741-42f0-84e2-60d59b4e1f0a Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.google.cloud.storage.StorageException: Remote host closed connection during handshake] with root cause
java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
...
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:994)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:142)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:84)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1011)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:499)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:549)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:358)
... 65 common frames omitted
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
...
It looks like when I make the call from the Pod, some extra HTTPS configuration is expected. I'm not sure.
So what I'm wondering is:
Whether this is some kind of firewall rule blocking the call from my Pod to the "outside" (which is weird since they run on the same network, or at least I thought so).
Whether it's because of the Gateway I defined, which is somehow not enabling this Pod.
Or whether I need to create the Storage object using some custom HTTP configuration, as can be seen in this reference:
https://googleapis.github.io/google-cloud-java/google-cloud-clients/apidocs/com/google/cloud/storage/StorageOptions.html#getDefaultHttpTransportOptions--
My knowledge of HTTPS and secure connections is not very good, so maybe my lack of background in this area is keeping me from seeing something obvious.
If someone has any idea what may be causing this, I would appreciate it very much.
Solved it. It really was Istio.
I didn't know that we need a ServiceEntry resource to define which inbound and outbound calls are allowed OUTSIDE the mesh.
So, even though GCS is in the same project as the GKE cluster, they are treated as completely separate services.
Just had to create it and everything worked fine:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  namespace: {{ cloud_app_namespace }}
  name: external-google-api
spec:
  hosts:
  - "www.googleapis.com"
  - "metadata.google.internal"
  - "storage.googleapis.com"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  - number: 80
    name: http
    protocol: HTTP
https://istio.io/docs/reference/config/networking/v1alpha3/service-entry/
EDIT
I disabled Istio injection on the namespace I was deploying the applications to, simply by using:
kubectl label namespace default istio-injection=disabled --overwrite
Then redeployed the application and tried a curl there, and it worked fine.
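(If disabling injection for the whole namespace is too broad, a hedged alternative is to opt a single workload out via the sidecar.istio.io/inject annotation on its pod template; all names in the sketch below are placeholders:)
# Deployment sketch: skip the Istio sidecar for just this workload instead of the whole namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-springboot-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-springboot-app
  template:
    metadata:
      labels:
        app: my-springboot-app
      annotations:
        sidecar.istio.io/inject: "false"   # no sidecar injected for these pods
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-springboot-app:dev   # placeholder image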
My doubt now is: I thought Istio only intercepted traffic at its gateway layer and that after that the message stays untouched, but that doesn't seem to be how it works. Apparently it adds some SSL layer to the request that my application doesn't do/require.
So should I have to change my application just to fit the service mesh's requirements?
