Hello. My project has two services in total: one built with Spring MVC and one built with Node.js, and both of them serve plain HTTP.
I expose them under one domain with an Ingress, and I applied TLS to this Ingress so that it is served over HTTPS.
However, I am now getting a 502 error. I think this is because the Spring MVC and Node.js servers themselves only speak HTTP.
Can I solve this without enabling HTTPS on both servers?
My Ingress is:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-loadbalancing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: domain-address
    http:
      paths:
      - path: /
        backend:
          serviceName: spring-service
          servicePort: 9000
      - path: /index
        backend:
          serviceName: node-service
          servicePort: 9001
  tls:
  - hosts:
    - domain-address
    secretName: nginx-tls
Thanks!!
My ingress pod logs:
172.17.0.1 - - [01/Apr/2020:02:10:28 +0000] "GET / HTTP/2.0" 200 4236 "https://mydomain/login?timeout=1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36" 18 0.007 [default-spring-service-9000] [] 172.17.0.8:1111
2020/04/01 02:10:28 [error] 6773#6773: *7390742 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: _, request: "GET /resources/aiaas/kr/css/reset.css HTTP/2.0", upstream: "http://172.17.0.9:1111/resources/aiaas/kr/css/reset.css", host: "mydomain:8891", referrer: "https://mydomain:8891/"
Related
I get a 502 Bad Gateway error over HTTPS when using Istio with an AWS ALB.
For some reason I have to put an ALB Ingress in front of my Istio ingress gateway, and I also need to use HTTPS for the connection from that Ingress to the Istio ingress gateway. With this setup I get the 502 Bad Gateway error; if I use HTTP instead, it works fine.
I can find the following in the istio-ingressgateway logs:
"response_code_details": "filter_chain_not_found"
Does anyone have any idea?
The following is my Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 443
  tls:
  - hosts:
    - "my.hostname.com"
The following is my istio-ingressgateway configuration:
...
serviceAnnotations:
  alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
  alb.ingress.kubernetes.io/healthcheck-port: "30218"
service:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: status-port
    nodePort: 30218
    port: 15021
    protocol: TCP
    targetPort: 15021
...
The following is my Istio Gateway:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - my.hostname.com
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - my.hostname.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: my-tls-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
It works fine if I change the Ingress to use HTTP, as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
  tls:
  - hosts:
    - "my.hostname.com"
The ALB does not include the SNI extension when performing the TLS handshake with your Istio Ingress Gateway. SNI is needed for route matching when you include 'host' in your Gateway resource.
This is documented here:
https://istio.io/latest/docs/ops/common-problems/network-issues/#configuring-sni-routing-when-not-sending-sni
The workaround is to configure the Istio 443 listener with hosts: '*', to avoid SNI matching, and then specify the host in the VirtualService resource for routing (a sketch of this follows after the links below). You may also do some header matching in the VirtualService resource.
AWS has stated that passing on the SNI from the client --> ALB connection is on their list, but there is no roadmap or ETA.
Also check these:
https://repost.aws/questions/QUOg0LUwafRFaorbsrYDP7xA/does-alb-send-sni-information-in-tls-handshake-to-a-back-end-server
https://kevin.burke.dev/kevin/amazons-albs-insecure-internal-traffic/
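A minimal sketch of that workaround, reusing the names from the question (my-gateway, my-tls-cert, my.hostname.com); the VirtualService name and the backend service are placeholders:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"                       # accept any SNI, so the ALB's missing SNI no longer matters
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-cert
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-routes               # placeholder name
  namespace: istio-system
spec:
  hosts:
  - my.hostname.com             # host-based routing now happens here, on the Host header
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-backend-service   # placeholder backend service
        port:
          number: 80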
I am using oauth2-proxy to authenticate users through Google; the authenticated user should then be passed through to Kibana, which is reachable at http://localhost:5601. However, after authentication succeeds (as shown in the first log line), I get 502 Bad Gateway: There was a problem connecting to the upstream server. Any idea what the problem is here?
The oauth2-proxy logs look like this:
10.20.51.169:5475- user#example.com[2022/05/10 11:12:40] [AuthSuccess] Authenticated via OAuth2: Session{email:user#example.com user:656549595959595 PreferredUsername: token:true id_token:true created:2022-05-10 11:12:40.385971851 +0000 UTC m=+2147.975924036 expires:2022-05-10 12:12:39.385971851 +0000 UTC m=+5746.975924036 refresh_token:true}
10.20.51.169:5475 - - [2022/05/10 11:12:40] kibana.sandbox.k8s.example.com GET - "/oauth2/callback?state=fefef5awef5aew:/&code=4/6a5wf650aw6f56we6f56aew6f5a60fwe56af5fa2ew6f0ef=email%20profile%20https://www.googleapis.com/auth/userinfo.profile%20https://www.googleapis.com/auth/userinfo.email%20openid&authuser=0&hd=example.com&prompt=consent" HTTP/1.1 "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36" 302 24 0.163
[2022/05/10 11:12:40] [error_page.go:93] Error proxying to upstream server: EOF
10.20.51.169:5475 - fawef-awef-awef-awef-FE - user#example.com [2022/05/10 11:12:40] kibana.sandbox.k8s.example.com GET / "/" HTTP/1.1 "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36" 502 2163 0.001
I am using the ECK operator, and the Kibana manifest looks like this:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.2.0
  http:
    service:
      spec:
        type: LoadBalancer
        ports:
        - name: https
          port: 443
          targetPort: 3000
      metadata:
        annotations:
          # Note that the backend talks over HTTP.
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
          # TODO: Fill in with the ARN of your certificate.
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:***
          # Only run SSL on the port named "https" below.
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    tls:
      selfSignedCertificate:
        subjectAltNames:
        - dns: kibana.sandbox.k8s.example.com
  count: 1
  elasticsearchRef:
    name: kube-es
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          requests:
            memory: 1Gi
            cpu: 0.5
          limits:
            memory: 2.5Gi
            cpu: 2
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
      - name: kibana-proxy
        image: 'quay.io/oauth2-proxy/oauth2-proxy:latest'
        imagePullPolicy: IfNotPresent
        args:
        - --cookie-secret=sergawergawgr4agrgargrgarg=
        - --client-id=872911544486-otlttds9nh9t6h2ifovba0kcd6sa3seb.apps.googleusercontent.com
        - --client-secret=iijIIIIJIIE_EDEWQID_DQWDWQD
        - --upstream=http://localhost:5601
        - --email-domain=example.com
        - --footer=-
        - --http-address=http://:3000
        - --redirect-url=https://kibana.sandbox.k8s.example.com/oauth2/callback
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 0.5
            memory: 256Mi
Let me know if anything else is needed. Thanks.
I have the following service:
# kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kib-opendistro-es-client-service ClusterIP 10.233.19.199 <none> 9200/TCP,9300/TCP,9600/TCP,9650/TCP 279d
#
When I curl the IP address of the service, it works fine:
# curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
I created an ingress so I can access the service from outside:
# kubectl get ingress ingress-elasticsearch -n logging
NAME HOSTS ADDRESS PORTS AGE
ingress-elasticsearch elasticsearch.host.com 10.32.200.4,10.32.200.7,10.32.200.8 80, 443 11h
#
When I curl any of 10.32.200.4, 10.32.200.7, or 10.32.200.8, I get an openresty 502 Bad Gateway response:
$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
When I run the curl command while tailing the pod logs, I see the following:
# kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: elasticsearch
  name: ingress-elasticsearch
  namespace: logging
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret
status:
  loadBalancer:
    ingress:
    - ip: 10.32.200.4
    - ip: 10.32.200.7
    - ip: 10.32.200.8
My service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kib-opendistro-es
    chart: opendistro-es-1.9.0
    heritage: Tiller
    release: es-kib
    role: client
  name: es-kib-opendistro-es-client-service
  namespace: logging
spec:
  clusterIP: 10.233.19.199
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  - name: metrics
    port: 9600
    protocol: TCP
    targetPort: 9600
  - name: rca
    port: 9650
    protocol: TCP
    targetPort: 9650
  selector:
    role: client
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
What is wrong with my setup?
By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.
Your backend service expects HTTPS requests, though, so you need to tell the NGINX ingress controller to use HTTPS.
You can do so by adding an annotation to the Ingress resource like this:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Details about this annotation are in the documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
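For the Ingress in the question, this comes down to adding that one annotation; a sketch keeping the original names and ports:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-elasticsearch
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    # Tell the NGINX ingress controller to proxy to the backend over TLS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - path: /
        backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret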
Learning k8s + Istio here. I've set up a cluster with two nodes and one master using kops, with Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service, and I'm hitting a super weird bug I can't pin down.
So, I have:
Keycloak service
Nginx service
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route 53 DNS records for:
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the Keycloak service and it was the only service, I had no problem. But when I added the dummy nginx service, I started getting the following behaviour.
In Firefox I go to keycloak.example.com and get a 404; after a hard refresh, the page loads.
Then I go to example.com and also get a 404; after a hard refresh, the page loads.
If I hard-refresh one page, then visiting the other page returns a 404 until I hard-refresh it too. It's as if some DNS entry is toggling between these two sites whenever I do a hard refresh, and I have no idea how to debug it.
If I run:
wget -O- example.com: I get a 301 redirect to https://example.com, as expected
wget -O- https://example.com: I get a 200 OK, as expected
wget -O- keycloak.example.com: I get a 301 redirect to https://keycloak.example.com, as expected
wget -O- https://keycloak.example.com: I get a 200 OK, as expected
then everything is fine. The problem only seems to occur in the browser.
I tried opening the pages in incognito mode, but the problem persists.
Can someone help me debug this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "example.com"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "keycloak.example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "keycloak.example.com"
  gateways:
  - keycloak-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the same TCP connection being reused for both services.
There is a discussion about it here: https://github.com/istio/istio/issues/9429
Using a different certificate for each Gateway makes the problem disappear.
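A minimal sketch of that fix, assuming a separate TLS secret per hostname (the secret names ingress-cert-example and ingress-cert-keycloak are placeholders); only the HTTPS servers are shown, and the port-80 redirect servers stay as they were:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert-example    # placeholder: certificate for example.com
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert-keycloak   # placeholder: certificate for keycloak.example.com
    hosts:
    - "keycloak.example.com"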
I pushed my Spring Boot app with a Vue frontend to Cloud Foundry. When I request a static file (css/img/js) such as https://bd-contacts-gateway.cfapps.io/js/chunk-vendors.8b8c1d1d.js
I get
502 Bad Gateway: Registered endpoint failed to handle the request.
In my Spring Boot app the static files are in src/main/resources/js, src/main/resources/css, and src/main/resources/img.
This is my manifest.yml used when pushing to CF:
---
applications:
- name: bd-contacts-gateway
  memory: 1024M
  buildpack: staticfile_buildpack
  routes:
  - route: bd-contacts-gateway.cfapps.io
EDIT:
This is an extract from the cf logs when calling the swagger-ui URL:
2019-09-11T11:34:04.60+0200 [RTR/10] OUT bd-contacts-gateway.cfapps.io
- [2019-09-11T09:34:04.491+0000] "GET /swagger-ui.html HTTP/1.1" 502 0 67 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.2 Safari/605.1.15" "10.10.66.115:63512" "10.10.148.61:61150" x_forwarded_for:"80.242.35.228, 10.10.66.115" x_forwarded_proto:"https" vcap_request_id:"6ccc526a-00c6-4a80-46c1-4c4eee6fcab7" response_time:0.115106066 app_id:"b5dc7fb4-a52a-4997-93cb-9ed40a158bcc" app_index:"0" x_b3_traceid:"3b50452335bafcf9" x_b3_spanid:"3b50452335bafcf9" x_b3_parentspanid:"-" b3:"3b50452335bafcf9-3b50452335bafcf9"
I found the solution. I had to add remote_ip_header and protocol_header to my application-cloud.yml properties, like this:
server:
  port: 8080
  tomcat:
    remote_ip_header: x-forwarded-for
    protocol_header: x-forwarded-proto
eureka:
  client:
    serviceUrl:
      defaultZone: http://bd-contacts-eureka.cfapps.io/eureka/
  instance:
    hostname: ${vcap.application.uris[0]}
    nonSecurePort: 80