502 Bad Gateway error with ALB & Istio when using HTTPS to HTTPS

I got a 502 Bad Gateway error over HTTPS when using Istio with an AWS ALB.
For some reason, I have to put an ALB Ingress in front of my Istio ingressgateway, and I also need HTTPS on the connection from the ALB to the Istio ingressgateway. With that setup I get the 502 Bad Gateway error; if I use HTTP instead, it works fine.
I can find the following in the istio-ingressgateway logs:
"response_code_details": "filter_chain_not_found"
Does anyone have an idea?
The following is my Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 443
  tls:
  - hosts:
    - "my.hostname.com"
The following is my istio-ingressgateway configuration:
...
serviceAnnotations:
  alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
  alb.ingress.kubernetes.io/healthcheck-port: "30218"
service:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: status-port
    nodePort: 30218
    port: 15021
    protocol: TCP
    targetPort: 15021
...
The following is my Istio Gateway:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - my.hostname.com
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - my.hostname.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: my-tls-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
It works fine if I change the Ingress to use HTTP, as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
  tls:
  - hosts:
    - "my.hostname.com"

The ALB does not include the SNI extension when performing the TLS handshake with your Istio ingress gateway. SNI is needed for route matching when you specify a host in your Gateway resource, which is why Envoy reports "filter_chain_not_found".
This is documented here:
https://istio.io/latest/docs/ops/common-problems/network-issues/#configuring-sni-routing-when-not-sending-sni
The workaround is to configure the Istio 443 listener with hosts: '*' to avoid SNI matching, and then specify the host in the VirtualService resource for routing. You can also do header matching in the VirtualService resource.
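A minimal sketch of that workaround, reusing the names from the question; the VirtualService name and the backend service are assumptions for illustration:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"                          # accept any (or no) SNI on the 443 listener
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-cert
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-routes                  # hypothetical name
  namespace: istio-system
spec:
  hosts:
  - my.hostname.com                # routing now keys off the Host header, not SNI
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-backend-service   # hypothetical backend service
        port:
          number: 80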
AWS has stated that forwarding the SNI from the client -> ALB connection to the target is on their list, but there is no roadmap or ETA.
Also check these:
https://repost.aws/questions/QUOg0LUwafRFaorbsrYDP7xA/does-alb-send-sni-information-in-tls-handshake-to-a-back-end-server
https://kevin.burke.dev/kevin/amazons-albs-insecure-internal-traffic/

Related

Cannot get elastic working via kubernetes ingress

I have the following service:
# kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kib-opendistro-es-client-service ClusterIP 10.233.19.199 <none> 9200/TCP,9300/TCP,9600/TCP,9650/TCP 279d
#
When I perform a curl to the IP address of the service it works fine:
# curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
I created an ingress so I can access the service from outside:
# kubectl get ingress ingress-elasticsearch -n logging
NAME HOSTS ADDRESS PORTS AGE
ingress-elasticsearch elasticsearch.host.com 10.32.200.4,10.32.200.7,10.32.200.8 80, 443 11h
#
When performing a curl to either 10.32.200.4, 10.32.200.7, or 10.32.200.8, I am getting an openresty 502 Bad Gateway response:
$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
When tailing the pod logs while performing the curl command, I see the following:
# kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: elasticsearch
  name: ingress-elasticsearch
  namespace: logging
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret
status:
  loadBalancer:
    ingress:
    - ip: 10.32.200.4
    - ip: 10.32.200.7
    - ip: 10.32.200.8
My service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kib-opendistro-es
    chart: opendistro-es-1.9.0
    heritage: Tiller
    release: es-kib
    role: client
  name: es-kib-opendistro-es-client-service
  namespace: logging
spec:
  clusterIP: 10.233.19.199
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  - name: metrics
    port: 9600
    protocol: TCP
    targetPort: 9600
  - name: rca
    port: 9650
    protocol: TCP
    targetPort: 9650
  selector:
    role: client
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
What is wrong with my setup?
By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.
Your backend service expects HTTPS, though, so you need to tell the NGINX ingress controller to use HTTPS.
You can do so by adding an annotation to the Ingress resource like this:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Details about this annotation are in the documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
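Applied to the Ingress above, the metadata section would look something like this (a sketch; only the annotation is new):
metadata:
  name: ingress-elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
  annotations:
    kubernetes.io/ingress.class: nginx
    # tell the NGINX ingress controller to proxy to the backend over TLS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"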

Use multiple wildcard TLS certificates with single GCE loadbalancer

I'm trying to use two TLS certificates for two wildcard domains on a single GCE load balancer Ingress object. But it gives me an error that the certificates could not be found, and it stops working on 443.
Sample Code:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: loadbalancer
spec:
  tls:
  - secretName: "tls-secret-1"
  - secretName: "tls-secret-2"
  rules:
  - host: "*.domain1.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: fe-svc
          servicePort: 80
  - host: "*.domain2.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: fe2-svc
          servicePort: 80
      - path: /
        backend:
          serviceName: fe2-svc
          servicePort: 80
Can anyone please provide a solution?
Thanks.

Proxy HTTP requests to external proxy with Istio egress gateway or ServiceEntry

I'm trying to proxy HTTP requests to an external proxy with an Istio egress gateway plus ServiceEntry, or with a ServiceEntry plus VirtualService, adding an HTTP header before routing the request to the external proxy, but I can't find a similar example.
Is it possible?
I've tried different configurations, but they didn't work.
For example:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: proxy
  namespace: lab
spec:
  hosts:
  - externalproxy
  location: MESH_EXTERNAL
  ports:
  - number: 3128
    name: http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: proxy-virtualservice
  namespace: lab
spec:
  hosts:
  - myservice
  http:
  - match:
    - port: 3128
    rewrite:
      authority: externalproxy
    route:
    - destination:
        host: externalproxy
        port:
          number: 3128
      headers:
        request:
          add:
            myheader: test
Thanks

K8s + Istio + Firefox hard refresh. Accessing one service causes a 404 on another service until the other service is accessed

Learning k8s + Istio here. I've set up a cluster of two nodes plus one master with kops, with Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service, and I'm hitting a super weird bug I have no idea where it's coming from.
So, I have a Keycloak service and an nginx service.
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route53 DNS records for
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the Keycloak service, and there was only that service, I had no problem. But when I added the dummy nginx service, I started getting this:
I go to keycloak.example.com in Firefox and get a 404. If I do a hard refresh, the page loads.
Then I go to example.com and get a 404. If I do a hard refresh, the page loads.
If I do a hard refresh on one page, then when I go to the other page I have to do a hard refresh again or I get a 404. It's like some DNS entry is toggling between these two things whenever I do the hard refresh. I have no idea how to debug this.
If I
wget -O- example.com I have a 301 redirect to https://example.com as expected
wget -O- https://example.com I have a 200 OK as expected
wget -O- keycloak.example.com I have a 301 redirect to https://keycloak.example.com as expected
wget -O- https://keycloak.example.com I have a 200 OK as expected
Then everything is fine. Seems like the problem only occurs in the browser.
I tried opening the pages in Incognito mode, but the problem persists.
Can someone help me debug this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "example.com"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "keycloak.example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "keycloak.example.com"
  gateways:
  - keycloak-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the browser reusing the same TCP connection for both services.
There is a discussion about it here: https://github.com/istio/istio/issues/9429
By using a different certificate for each Gateway, the problem disappears.
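For example, the 443 server of keycloak-gateway could reference its own credential; keycloak-ingress-cert here is a hypothetical second secret created alongside ingress-cert:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: keycloak-ingress-cert   # separate cert per Gateway
    hosts:
    - "keycloak.example.com"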

Kubernetes with rewrite-target and kube-lego

I am trying to create a redirect rule to GCS buckets with my own certs. I have this configuration:
kind: Service
apiVersion: v1
metadata:
  name: proxy-to-gcs
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-to-gcs
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: bucket_name/public
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: secret-name-tls
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: proxy-to-gcs
          servicePort: 80
When I request www.example.com/.well-known/acme-challenge/ expecting the kube-lego endpoint, I see the Google Storage bucket's 404 page. The problem is the rewrite-target, which doesn't take the existence of kube-lego into account. Any suggestions? Thanks.
If you just want to host a static website from a bucket, you can use the official doc as a how-to.
For Ingress, you can use the HTTP(S) Load Balancer, Google Cloud's own load balancer.
You can route traffic from both URLs to one bucket and have HTTPS on both.
