GKE ingress HTTPS redirect - FrontendConfig not recognized

I have a GKE ingress with both HTTP and HTTPS. I want to redirect traffic from port 80 to port 443.
I found this:
https://github.com/kubernetes/ingress-gce/issues/1075
which led to this:
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect.
The proposed solution adds a FrontendConfig with a RedirectToHttps flag, which uses some LoadBalancer functionality. Yet when I try to add the FrontendConfig, I get the following error:
error: unable to recognize "ssl.yaml": no matches for kind "FrontendConfig" in version "networking.gke.io/v1beta1"
I have also tried 'networking.gke.io/v1' and 'v1beta2'.
The latest GKE version available in my zone is 1.17.13-gke.2001. I recently launched the cluster, so although I don't know how to check the GKE version, I reckon it's running the latest one.
Does anyone have a clue why my kubectl doesn't recognize this kind?
Ingress yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    FrontendConfig: my-frontend-config
    kubernetes.io/ingress.global-static-ip-name: 'web-static-ip'
    networking.gke.io/managed-certificates: mycertificate
    # kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api/*
        backend:
          serviceName: backend
          servicePort: 80
Redirect yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true

Thank you for pointing me in the right direction!
I had to upgrade the cluster as MrKoopaKiller indicated, and also change the annotation from:
FrontendConfig: my-frontend-config
to:
networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
and it worked!
Also, make sure you have:
kubernetes.io/ingress.allow-http: "true"
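For reference, a minimal sketch of the working pair, reusing the names from the question:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    kubernetes.io/ingress.global-static-ip-name: 'web-static-ip'
    networking.gke.io/managed-certificates: mycertificate
    kubernetes.io/ingress.allow-http: "true"  # HTTP must stay enabled for the redirect to work
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 80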

Related

Nginx ingress configuration for Kubernetes cluster hosted on Windows

I am running a Kubernetes cluster on my Windows PC via Docker Desktop. I am trying to create a very basic pod with a simple ingress configuration, but it doesn't seem to work. I thought a backend pod + service + ingress was a very basic setup, but I can't find a lot of help online. Please advise what I am doing wrong here.
My deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-cluster-ip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 1234
    targetPort: 80
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Exact
        backend:
          service:
            name: test-cluster-ip
            port:
              number: 1234
This is what I see when I access localhost from the browser
Also, I would like to ask if it is uncommon to run Kubernetes on Windows even for testing (especially with ingress). I don't seem to find a lot of examples on the internet.
I thought the backend pod + service + ingress is a very basic setup, however I don't find a lot of help online. Please advise what I am doing wrong here.
It is indeed a very basic setup. And your k8s deployment/service/ingress yaml files are correct.
First, check if you installed the NGINX ingress controller. If not, run:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
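To verify the controller is running before testing (that manifest creates the ingress-nginx namespace):

$ kubectl get pods -n ingress-nginx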
After that, you will be able to reach the k8s cluster using the following URL:
http://kubernetes.docker.internal/
But deploying ingress like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /testpath
pathType: Exact
backend:
service:
name: test-cluster-ip
port:
number: 1234
you are configuring the ingress to rewrite /testpath to /, so requesting a URL without /testpath will return a 404 status code.
See more rewrite examples in the ingress-nginx documentation.
So, if you use the following URL, you will get the Nginx webpage from the k8s deployment.
http://kubernetes.docker.internal/testpath
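With the controller installed, both behaviours can be checked from the host:

$ curl -i http://kubernetes.docker.internal/testpath   # 200 - rewritten to / on the nginx service
$ curl -i http://kubernetes.docker.internal/           # 404 - no rule matches /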

Ingress rewrite rule in AKS AGIC gives 502

I'm trying to create an HTTPS ingress for my Node.js authentication (auth) REST service in AKS, but I'm getting a 502 Bad Gateway response.
Here are my deployment and service definitions:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
  namespace: auth
  labels:
    app: auth
spec:
  selector:
    matchLabels:
      app: auth
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
    spec:
      imagePullSecrets:
      - name: docker-hub-creds
      containers:
      - name: auth
        image: ***image***
        ports:
        - containerPort: 80
          name: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth
  namespace: auth
spec:
  selector:
    app: auth
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
I think that's all pretty basic and it seems to work OK. I can see the service running, and if I expose a NodePort then I can access it with no problems. The service responds to well-formed POST requests on the /auth path with a JWT.
I have configured an Azure Application Gateway following Microsoft's instructions, and working through the troubleshooting guide leads me to believe that the installation is OK. I have also checked through the web UI and there appear to be no errors. Finally, I worked through the support options, and the automated analysis of my cluster found no major configuration issues.
Next, I tried to create an HTTPS ingress route for my service, and this is where it goes wrong. This is made more complicated by the dynamic generation of certificates for TLS.
The ingress definition looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-in
  namespace: auth
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging
    cert-manager.io/acme-challenge-type: http01
    ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - ***hostname***
    secretName: ***secret***
  rules:
  - host: ***same hostname***
    http:
      paths:
      - backend:
          serviceName: auth
          servicePort: 80
        path: /api/(auth/.*)
I have two rewrite-target annotations in there because I can't determine which one this ingress controller uses. All the examples on the web use the nginx. prefix, so I added it in desperation, despite thinking it's probably not necessary.
Accessing the service through: ***hostname***/api/auth results in a Bad Gateway error.
I have checked through the portal and I can see the route is registered, listeners and rules are there, and my service is listed in the backend pools, but there is nothing in the 'rewrite' tabs. I expected to see something in the rewrite tabs.
I've tooled my service to log all access, and the logs show this, repeatedly:
{"level":30,"time":1611739355140,"pid":17,"hostname":"auth-6c7757bb89-d72td","msg":"Req-URL: /api/(auth/.*)"}
Describing the ingress gives me this:
Name:             auth-in
Namespace:        auth
Address:          ***redacted***
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  ***redacted cert name*** terminates ***hostname***
Rules:
  Host            Path            Backends
  ----            ----            --------
  ***hostname***
                  /api/(auth/.*)  auth:80 (10.0.0.69:80)
Annotations:      cert-manager.io/acme-challenge-type: http01
                  cert-manager.io/cluster-issuer: letsencrypt-staging
                  ingress.kubernetes.io/rewrite-target: /$1
                  kubernetes.io/ingress.class: azure/application-gateway
                  nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
  Type    Reason             Age  From          Message
  ----    ------             ---  ----          -------
  Normal  CreateCertificate  43m  cert-manager  Successfully created Certificate "***cert-name***"
Two things to note. First, the logs show that the URL isn't being rewritten; it's being passed exactly as the path shows, including the regex part. Second, the Default backend entry in the ingress description shows an error. I'm not sure the second one matters, but the first is clearly wrong.
I am keen to discover how to diagnose the problem and then fix it.
Since you are using AGIC, you can include the Backend Path Prefix annotation appgw.ingress.kubernetes.io/backend-path-prefix: "/"
The Ingress will be like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-in
  namespace: auth
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging
    cert-manager.io/acme-challenge-type: http01
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  tls:
  - hosts:
    - ***hostname***
    secretName: ***secret***
  rules:
  - host: ***same hostname***
    http:
      paths:
      - backend:
          serviceName: auth
          servicePort: 80
        path: /api/auth/*
As of Nov 12 '21, AGIC has also included a rewrite-rule-set as part of this PR. For rewrite rules, you can use the rewrite-rule-set annotation:
appgw.ingress.kubernetes.io/rewrite-rule-set: <rewrite rule set>
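For example, a sketch of the annotation in place (my-rewrite-rule-set is a placeholder for a rule set that already exists on the Application Gateway):

metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/rewrite-rule-set: my-rewrite-rule-set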

Kubernetes Spring Boot service works inside the cluster but gets Whitelabel 404 error outside

I have a Spring Boot app container in a pod, and a service mapped to access the app, inside a minikube cluster. When I exec into the cluster and try to access the API endpoints, it works fine. But after I exposed it using the NGINX ingress controller, it shows a Whitelabel 404 error for every request.
I added the minikube ingress addon and configured the ingress using a yaml file.
Here's the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /path/.*
        backend:
          serviceName: spring-app
          servicePort: 8080
Any tips on how to solve this? Thanks in advance.
This should fix it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /path/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: spring-app
            port:
              number: 8080
For more information, please check the ingress-nginx rewrite documentation.
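With the capture group in place, a request under /path/ is rewritten before it reaches the Spring app; for example (minikube ip resolves the cluster address, and /api/hello is just an assumed endpoint on the app):

$ curl -i http://$(minikube ip)/path/api/hello   # forwarded to spring-app as /api/hello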

upstream connect error or disconnect/reset before headers. reset reason: connection failure. Spring Boot and Java 11

I'm having a problem migrating my pure Kubernetes app to an Istio-managed one. I'm using Google Cloud Platform (GCP), Istio 1.4, Google Kubernetes Engine (GKE), Spring Boot and Java 11.
I had the containers running in a pure GKE environment without a problem. Then I started migrating my Kubernetes cluster to use Istio. Since then, I've been getting the following message when I try to access the exposed service:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
This error message looks really generic. I found a lot of different problems with the same error message, but none of them were related to my problem.
Below are the Istio versions:
client version: 1.4.10
control plane version: 1.4.10-gke.5
data plane version: 1.4.10-gke.5 (2 proxies)
Below are my yaml files:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    account: tree-guest
  name: tree-guest-service-account
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tree-guest
    service: tree-guest
  name: tree-guest
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: tree-guest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tree-guest
    version: v1
  name: tree-guest-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tree-guest
      version: v1
  template:
    metadata:
      labels:
        app: tree-guestaz
        version: v1
    spec:
      containers:
      - image: registry.hub.docker.com/victorsens/tree-quest:circle_ci_build_00923285-3c44-4955-8de1-ed578e23c5cf
        imagePullPolicy: IfNotPresent
        name: tree-guest
        ports:
        - containerPort: 8080
      serviceAccount: tree-guest-service-account
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tree-guest-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tree-guest-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - tree-guest-gateway
  http:
  - match:
    - uri:
        prefix: /v1
    route:
    - destination:
        host: tree-guest
        port:
          number: 8080
To apply the yaml file I used the following command:
kubectl apply -f <(istioctl kube-inject -f ./tree-guest.yaml)
Below is the Istio proxy status output after deploying the application:
istio-ingressgateway-6674cc989b-vwzqg.istio-system   SYNCED   SYNCED   SYNCED   SYNCED   istio-pilot-ff4489db8-2hx5f   1.4.10-gke.5
tree-guest-v1-774bf84ddd-jkhsh.default               SYNCED   SYNCED   SYNCED   SYNCED   istio-pilot-ff4489db8-2hx5f   1.4.10-gke.5
If someone has a tip about what is going wrong, please let me know. I've been stuck on this problem for a couple of days.
Thanks.
As @Victor mentioned, the problem here was a wrong yaml file:
"I solved it. In my case the yaml file was wrong. I reviewed it and the problem now is solved. Thank you guys." – Victor
If you're looking for yaml samples, I would suggest taking a look at the Istio GitHub samples.
Since "503 upstream connect error or disconnect/reset before headers. reset reason: connection failure" occurs very often, I've put together a little troubleshooting answer: other questions with 503 errors which I've encountered over several months, with answers, useful information from the Istio documentation, and things I would check.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common causes of 503 errors from the Istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
Check the Service's port names; Istio can only route traffic correctly if it knows the protocol. Port names should be <protocol>[-<suffix>], as mentioned in the Istio documentation (see the sketch after this list).
Check mTLS; problems caused by mTLS usually result in 503 errors.
Check that Istio itself works; I would recommend applying the bookinfo application example and checking that it works as expected.
Check that your namespace is injected, with kubectl get namespace -L istio-injection.
If a VirtualService using subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot will refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
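A minimal sketch of the port-naming convention, reusing the Service from the question (which already follows it):

apiVersion: v1
kind: Service
metadata:
  name: tree-guest
spec:
  selector:
    app: tree-guest
  ports:
  - name: http        # <protocol>[-<suffix>], e.g. http, http-api, grpc-internal
    port: 8080
    targetPort: 8080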
I landed exactly here with exactly similar symptoms, but in my case I had to switch the pod's listen address from 127.0.0.1 to 0.0.0.0, which solved my issue.
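Since the app in this thread is Spring Boot, a minimal sketch of that change (server.address and server.port are standard Spring Boot properties; the file location is the usual default):

# src/main/resources/application.yaml
server:
  address: 0.0.0.0   # bind to all interfaces so the Envoy sidecar can reach the app
  port: 8080         # matches the containerPort and the Service targetPort above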

Kubernetes HTTPS ingress 400 response

I have a bare-metal Kubernetes cluster (1.13) and am running the nginx ingress controller (deployed via helm into the default namespace, v0.22.0).
I have an ingress in a different namespace that attempts to use the nginx controller.
#ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp
  annotations:
    kubernetes.io/backend-protocol: https
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  tls:
  - hosts:
    - my-host
    secretName: tls-cert
  rules:
  - host: my-host
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: https
        path: "/api/(.*)"
The nginx controller successfully finds the ingress and says that there are endpoints. If I hit the endpoint, I get a 400 with no content. If I turn on custom-http-headers then I get a 404 from nginx; my service is not being hit. According to rewrite logging, the URL is being rewritten correctly.
I have also hit the service directly from inside the pod, and that works as well.
#service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - name: https
    protocol: TCP
    port: 5000
    targetPort: https
  selector:
    app: my-app
  clusterIP: <redacted>
  type: ClusterIP
  sessionAffinity: None
What could be going wrong?
EDIT: Disabling HTTPS everywhere still gives the same 400 error. However, if my app expects HTTPS requests and nginx sends HTTP requests, then the requests do reach the app (but it can't process them).
Nginx will silently fail with a 400 if request headers are invalid (like special characters in them). You can debug that using tcpdump.
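A rough sketch of that kind of capture (the label selector is an assumption; check your helm release, and tcpdump must be available on the node or in the pod):

# find the controller pod (deployed via helm into the default namespace)
$ kubectl get pods -n default -l app=nginx-ingress
# dump the raw HTTP requests so invalid header bytes become visible
$ tcpdump -A -s 0 -i any 'tcp port 80'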
