Kubernetes HTTPS ingress 400 response

I have a bare-metal kubernetes cluster (1.13) and am running nginx ingress controller (deployed via helm into the default namespace, v0.22.0).
I have an ingress in a different namespace that attempts to use the nginx controller.
# ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp
  annotations:
    kubernetes.io/backend-protocol: https
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  tls:
    - hosts:
        - my-host
      secretName: tls-cert
  rules:
    - host: my-host
      http:
        paths:
          - backend:
              serviceName: my-service
              servicePort: https
            path: "/api/(.*)"
The nginx controller successfully finds the ingress and says that there are endpoints. If I hit the endpoint, I get a 400 with no content. If I turn on custom-http-headers then I get a 404 from nginx; my service is not being hit. According to the rewrite logging, the URL is being rewritten correctly.
I have also hit the service directly from inside the pod, and that works as well.
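For reference, the kind of in-cluster check I mean looks roughly like this; the pod name, namespace and path are placeholders, curl must exist in that pod, and -k skips certificate verification since the backend serves HTTPS on port 5000:
# Sanity check: call the service directly, bypassing the ingress
kubectl exec -it <some-app-pod> -n <app-namespace> -- \
  curl -vk https://my-service:5000/api/health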
# service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
    - name: https
      protocol: TCP
      port: 5000
      targetPort: https
  selector:
    app: my-app
  clusterIP: <redacted>
  type: ClusterIP
  sessionAffinity: None
What could be going wrong?
EDIT: Disabling HTTPS everywhere still gives the same 400 error. However, if my app expects HTTPS requests and nginx sends plain HTTP requests, the requests do reach the app (but it cannot process them).
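For reference, the annotation the NGINX ingress controller documents for HTTPS backends uses the controller's own prefix, unlike the one in my manifest above; I am not sure this is the culprit, but it would look like this:
# Documented form of the backend-protocol annotation (nginx-ingress 0.21+)
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"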

Nginx will silently fail with a 400 if the request headers are invalid (for example, special characters in them). You can debug that using tcpdump.
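A sketch of such a capture, run on the node (or in the controller pod, if tcpdump is available there); the interface and backend port are assumptions:
# Capture traffic from the ingress controller to the backend on port 5000
sudo tcpdump -i any -A -s 0 'tcp port 5000'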

Ingress rewrite rule in AKS AGIC gives 502

I'm trying to create HTTPS ingress for my node.js authentication (auth) REST service in AKS, but I'm getting a 502 Bad Gateway response.
Here's my deployment and service definitions:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
  namespace: auth
  labels:
    app: auth
spec:
  selector:
    matchLabels:
      app: auth
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
    spec:
      imagePullSecrets:
        - name: docker-hub-creds
      containers:
        - name: auth
          image: ***image***
          ports:
            - containerPort: 80
              name: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth
  namespace: auth
spec:
  selector:
    app: auth
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
I think that's all pretty basic and it seems to work OK. I can see the service running, and if I expose a NodePort then I can access it with no problems. The service responds to well-formed POST requests on the /auth path with a JWT.
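For illustration, a direct NodePort test of that kind might look as follows; the node IP, NodePort and request body are placeholders, not the actual values:
# Hypothetical direct test of the auth service via a NodePort
curl -v -X POST http://<node-ip>:<node-port>/auth \
  -H "Content-Type: application/json" \
  -d '{"username": "user", "password": "secret"}'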
I have configured an Azure Application Gateway following Microsoft's instructions, and following the troubleshooting guide leads me to believe that the installation has worked OK. I have also checked through the web UI and there appear to be no errors. Finally, I worked through the support options, and the automated analysis of my cluster found no major configuration issues.
Next, I tried to create an HTTPS ingress route for my service, and this is where it goes wrong. This is made more complicated by the dynamic generation of certificates for TLS.
The ingress definition looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-in
  namespace: auth
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging
    cert-manager.io/acme-challenge-type: http01
    ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
    - hosts:
        - ***hostname***
      secretName: ***secret***
  rules:
    - host: ***same hostname***
      http:
        paths:
          - backend:
              serviceName: auth
              servicePort: 80
            path: /api/(auth/.*)
I have two rewrite-targets in there because I can't determine which one this ingress controller uses. All the examples on the web use the nginx. prefix, so I added it in desperation, despite thinking that it's probably not necessary.
Accessing the service through: ***hostname***/api/auth results in a Bad Gateway error.
I have checked through the portal and I can see the route is registered, listeners and rules are there, and my service is listed in the backend pools, but there is nothing in the 'rewrite' tabs. I expected to see something in the rewrite tabs.
I've tooled my service to log all access, and the logs show this, repeatedly:
{"level":30,"time":1611739355140,"pid":17,"hostname":"auth-6c7757bb89-d72td","msg":"Req-URL: /api/(auth/.*)"}
Describing the ingress gives me this:
Name:             auth-in
Namespace:        auth
Address:          ***redacted***
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  ***redacted cert name*** terminates ***hostname***
Rules:
  Host              Path              Backends
  ----              ----              --------
  ***hostname***
                    /api/(auth/.*)    auth:80 (10.0.0.69:80)
Annotations:        cert-manager.io/acme-challenge-type: http01
                    cert-manager.io/cluster-issuer: letsencrypt-staging
                    ingress.kubernetes.io/rewrite-target: /$1
                    kubernetes.io/ingress.class: azure/application-gateway
                    nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
  Type    Reason             Age   From          Message
  ----    ------             ----  ----          -------
  Normal  CreateCertificate  43m   cert-manager  Successfully created Certificate "***cert-name***"
Two things to note: first, the logs show that the URL isn't being rewritten; it's being passed exactly as the path shows, including the regex part. Second, the Default backend entry in the ingress description shows an error. I'm not sure that the second one matters, but the first is clearly wrong.
I am keen to discover how to diagnose the problem and then fix it.
Since you are using AGIC, you can include the Backend Path Prefix annotation: appgw.ingress.kubernetes.io/backend-path-prefix: "/"
The Ingress will then look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-in
  namespace: auth
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging
    cert-manager.io/acme-challenge-type: http01
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  tls:
    - hosts:
        - ***hostname***
      secretName: ***secret***
  rules:
    - host: ***same hostname***
      http:
        paths:
          - backend:
              serviceName: auth
              servicePort: 80
            path: /api/auth/*
AGIC has also included a rewrite-rule-set as of Nov 12 '21, as part of this PR. For rewrite rules, you can use the rewrite-rule-set annotation:
appgw.ingress.kubernetes.io/rewrite-rule-set: <rewrite rule set>

K8s + Istio + Firefox hard refresh: accessing one service causes a 404 on another service, until the other service is accessed

Learning k8s + Istio here. I've set up a 2-node + 1-master cluster with kops. I have Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service. I'm hitting a super weird bug and I have no idea where it's coming from.
So, I have a
Keycloak service
Nginx service
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route53 DNS records for
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the keycloak service, and there was only that service, I had no problem. But when I added the dummy nginx service, I started getting this.
I would use firefox to go to keycloak.example.com, and get a 404. If I do a hard refresh, then the page loads.
Then I would go to example.com, and would get a 404. If I do a hard refresh, then the page loads.
If I do a hard refresh on one page, then when I go to the other page I have to do a hard reload or I get a 404. It's like some DNS entry is toggling between these two things whenever I do the hard refresh. I have no idea how to debug this.
If I
wget -O- example.com I have a 301 redirect to https://example.com as expected
wget -O- https://example.com I have a 200 OK as expected
wget -O- keycloak.example.com I have a 301 redirect to https://keycloak.example.com as expected
wget -O- https://keycloak.example.com I have a 200 OK as expected
Then everything is fine. Seems like the problem only occurs in the browser.
I tried opening the pages in Incognito mode, but the problem persists.
Can someone help me in debugging this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      name: http
      protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      tls:
        httpsRedirect: true
      hosts:
        - "example.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert
      hosts:
        - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
    - "example.com"
  gateways:
    - nginx-gateway
  http:
    - route:
        - destination:
            port:
              number: 80
            host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      tls:
        httpsRedirect: true
      hosts:
        - "keycloak.example.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert
      hosts:
        - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
    - "keycloak.example.com"
  gateways:
    - keycloak-gateway
  http:
    - route:
        - destination:
            port:
              number: 80
            host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the same TCP connection being kept and reused for both services.
There is a discussion about it here https://github.com/istio/istio/issues/9429
By using a different certificate for each Gateway, the problem disappears.
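A minimal sketch of that change, assuming two separately issued certificates stored in the (hypothetical) secrets ingress-cert-nginx and ingress-cert-keycloak; only the HTTPS entries under servers differ from the manifests above:
# nginx-gateway: HTTPS server
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert-nginx      # hypothetical per-host secret
      hosts:
        - "example.com"
# keycloak-gateway: HTTPS server
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert-keycloak   # hypothetical per-host secret
      hosts:
        - "keycloak.example.com"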

Spring Boot, Minikube, Istio and Keycloak: "Invalid parameter: redirect_uri"

I have an application running in Minikube that works with the ingress gateway as expected. A Spring Boot app is called, the view is displayed, and a protected resource is called via a link. The call is forwarded to Keycloak, is authorized via the login mask, and the protected resource is displayed as expected.
With Istio, the redirect fails with the message: "Invalid parameter: redirect_uri".
My Istio Gateway config
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio-system
  name: istio-bomc-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
My virtualservice config
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-bomc-app-hrm-virtualservice
  namespace: bomc-app
spec:
  hosts:
    - "*"
  gateways:
    - istio-bomc-app-gateway.istio-system.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: /bomc-hrm
      route:
        - destination:
            host: bomc-hrm-service.bomc-app.svc.cluster.local
            port:
              number: 80
After clicking the protected link, I get the following URI in the browser:
http://192.168.99.100:31380/auth/realms/bomc-hrm-realm/protocol/openid-connect/auth?response_type=code&client_id=bomc-hrm-app&redirect_uri=http%3A%2F%2F192.168.99.100%2Fbomc-hrm%2Fui%2Fcustomer%2Fcustomers&state=4739ab56-a8f3-4f78-bd29-c05e7ea7cdbe&login=true&scope=openid
I see the redirect_uri=http%3A%2F%2F192.168.99.100%2F is not complete. The port 31380 is missing.
How does Istio VirtualService need to be configured?
Have you checked the output of kubectl describe on the Istio resources? It may give you some clues.
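For instance, using the resource names from the manifests above (describing Istio resources requires the Istio CRDs to be installed):
# Inspect the Gateway and VirtualService as the cluster sees them
kubectl describe gateway istio-bomc-app-gateway -n istio-system
kubectl describe virtualservice istio-bomc-app-hrm-virtualservice -n bomc-app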

HTTPS for EKS load balancer

I want to secure my web application running on Kubernetes (EKS).
I have one front-end service. The front-end service is running on port 80, and I want to run it on port 443. When I run kubectl get all, I see that my load balancer is running on port 443, but I am not able to open it in the browser.
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234567890:certificate/12345c409-ec32-41a8-8542-712345678
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: 123456789.dkr.ecr.us-west-2.amazonaws.com/demoui:demo123
          ports:
            - containerPort: 80
          env:
            - name: MESSAGE
              value: Hello Kubernetes!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200,404"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: hello-kubernetes
              servicePort: 80
The AWS ALB Ingress Controller is designed to create an Application Load Balancer and the relevant AWS resources from an Ingress YAML configuration file. The controller parses the load balancer configuration from the Ingress definition and then applies Target Groups, one per Kubernetes service, with the specified instances and the NodePorts exposed on particular nodes. At the top level, Listeners expose the connection port for the Load Balancer and make request-routing decisions according to the defined routing rules, as per the official AWS ALB Ingress Controller workflow documentation.
After that short theory tour, I have a few concerns about your current configuration.
First, I would recommend checking the AWS ALB Ingress Controller setup and inspecting the relevant logs:
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
Then verify whether the Load Balancer has been successfully created in the AWS console.
Inspect the Target Groups for the particular ALB to ensure that the health checks for the k8s instances are all good (see the sketch below).
Ensure that the Security Groups contain appropriate firewall rules for your instances, allowing inbound and outbound network traffic across the ALB.
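A sketch of that target-group check with the AWS CLI; the target group name and ARN are placeholders:
# Hypothetical check of target group health behind the ALB
aws elbv2 describe-target-groups --names <target-group-name>
aws elbv2 describe-target-health --target-group-arn <target-group-arn>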
I encourage you to get familiar with the dedicated chapter about HTTP to HTTPS redirection in the official AWS ALB Ingress Controller documentation.
Here is what I have for my cluster to run on https.
In my ingress/Load balancer:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: CERT
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
# ports using the ssl certificate
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# which protocol a Pod speaks
In my Ingress controller, configMap of the nginx configuration:
app.kubernetes.io/force-ssl-redirect: "true"
Hope this works for you.
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/
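For context, the pattern from that guide combines the listen-ports and actions.ssl-redirect annotations with an extra ssl-redirect path; a sketch adapted to the hello-kubernetes service from the question, not verified against a live cluster:
metadata:
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect       # resolved via the actions.* annotation
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: hello-kubernetes
              servicePort: 80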

Kubernetes with rewrite-target and kube-lego

I am trying to create a redirect rule to GCS buckets with my own certs. I have the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: proxy-to-gcs
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-to-gcs
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: bucket_name/public
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: secret-name-tls
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: proxy-to-gcs
              servicePort: 80
When I expect www.example.com/.well-known/acme-challenge/ to be served by the kube-lego endpoint, I see the Google Storage bucket's 404 page instead. The problem is with that rewrite-target, which doesn't take kube-lego's existence into account. Any suggestions? Thanks.
If you just want to host a static website from a bucket, you can use the official doc as a how-to.
For the Ingress, you can use the HTTP(S) Load Balancer, Google Cloud's native load balancer.
You can route your traffic from two URLs to one bucket and have HTTPS on both.
