I have the following service:
# kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
es-kib-opendistro-es-client-service   ClusterIP   10.233.19.199   <none>        9200/TCP,9300/TCP,9600/TCP,9650/TCP   279d
#
When I perform a curl to the IP address of the service it works fine:
# curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
I created an ingress so I can access the service from outside:
# kubectl get ingress ingress-elasticsearch -n logging
NAME                    HOSTS                    ADDRESS                               PORTS     AGE
ingress-elasticsearch   elasticsearch.host.com   10.32.200.4,10.32.200.7,10.32.200.8   80, 443   11h
#
When performing a curl to either 10.32.200.4, 10.32.200.7, or 10.32.200.8, I am getting an openresty 502 Bad Gateway response:
$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
When I tail the pod logs while performing the curl command, I see the following:
# kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: elasticsearch
  name: ingress-elasticsearch
  namespace: logging
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret
status:
  loadBalancer:
    ingress:
    - ip: 10.32.200.4
    - ip: 10.32.200.7
    - ip: 10.32.200.8
My service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kib-opendistro-es
    chart: opendistro-es-1.9.0
    heritage: Tiller
    release: es-kib
    role: client
  name: es-kib-opendistro-es-client-service
  namespace: logging
spec:
  clusterIP: 10.233.19.199
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  - name: metrics
    port: 9600
    protocol: TCP
    targetPort: 9600
  - name: rca
    port: 9650
    protocol: TCP
    targetPort: 9650
  selector:
    role: client
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
What is wrong with my setup?
By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.
Your backend service expects HTTPS, though, so you need to tell the nginx ingress controller to use HTTPS.
You can do so by adding an annotation to the Ingress resource like this:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Details about this annotation are in the documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
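As a minimal sketch, the annotation would be added to the metadata of the Ingress from the question (everything else stays the same):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    # proxy to the Elasticsearch client service over TLS instead of plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: elasticsearch
  name: ingress-elasticsearch
  namespace: logging

With this in place, nginx performs a TLS handshake against port 9200, which is what the OpenDistro security plugin expects instead of the plaintext request shown in the log above.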
Related
I got a 502 Bad Gateway error for HTTPS when using Istio and an AWS ALB.
For some reason, I have to put an ALB ingress in front of my Istio ingressgateway, and I also need to use HTTPS to connect from my ingress to the Istio ingressgateway. But I get a 502 Bad Gateway error. If I use HTTP, it works fine.
I can find the following information in the logs of the istio-ingressgateway:
"response_code_details": "filter_chain_not_found"
Does anyone have an idea?
The following is my Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 443
  tls:
  - hosts:
    - "my.hostname.com"
The following is my istio-ingressgateway configuration:
...
serviceAnnotations:
  alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
  alb.ingress.kubernetes.io/healthcheck-port: "30218"
service:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: status-port
    nodePort: 30218
    port: 15021
    protocol: TCP
    targetPort: 15021
...
The following is my Istio Gateway:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - my.hostname.com
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - my.hostname.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: my-tls-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
It works fine if I change the Ingress to use HTTP, as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
  tls:
  - hosts:
    - "my.hostname.com"
The ALB does not include the SNI extension when performing the TLS handshake with your Istio Ingress Gateway. SNI is needed for route matching when you include specific hosts in your Gateway resource.
This is documented here:
https://istio.io/latest/docs/ops/common-problems/network-issues/#configuring-sni-routing-when-not-sending-sni
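You can reproduce this from a shell with openssl (a sketch; <ingressgateway-address> stands for whatever address and port reach the gateway's HTTPS listener, and -noservername needs a reasonably recent OpenSSL):

# Handshake without SNI, which is effectively what the ALB does; with host-specific
# servers in the Gateway, Envoy finds no matching filter chain and the connection fails
openssl s_client -connect <ingressgateway-address>:8443 -noservername

# The same handshake with SNI matches the my.hostname.com server and succeeds
openssl s_client -connect <ingressgateway-address>:8443 -servername my.hostname.com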
The workaround is to configure the Istio 443 listener with hosts: '*', to avoid SNI matching, and then specify the host in the VirtualService resource for routing. You may also do some header matching in the VirtualService resource.
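A minimal sketch of that workaround, reusing the names from the question (my-gateway, my-tls-cert, my.hostname.com); the VirtualService name and the backend service are illustrative:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"                    # accept the ALB's SNI-less handshake for any host
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: my-tls-cert
      mode: SIMPLE
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-routes            # illustrative name
  namespace: istio-system
spec:
  hosts:
  - my.hostname.com          # do the host matching here instead of in the Gateway
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-backend-service   # illustrative backend service
        port:
          number: 80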
AWS has stated that including the SNI from the client --> ALB connection is on their list, but there is no roadmap or ETA.
Also check these:
https://repost.aws/questions/QUOg0LUwafRFaorbsrYDP7xA/does-alb-send-sni-information-in-tls-handshake-to-a-back-end-server
https://kevin.burke.dev/kevin/amazons-albs-insecure-internal-traffic/
Learning k8s + Istio here. I've set up a 2-node + 1-master cluster with kops, with Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service, and I'm hitting a super weird bug that I have no idea where it's coming from.
So, I have a
Keycloak service
Nginx service
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route53 DNS records for
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the keycloak service, and there was only that service, I had no problem. But when I added the dummy nginx service, I started getting this.
I would use firefox to go to keycloak.example.com, and get a 404. If I do a hard refresh, then the page loads.
Then I would go to example.com, and would get a 404. If I do a hard refresh, then the page loads.
If I do a hard refresh on one page, then when I go to the other page, I will have to do a hard reload or I get a 404. It's like some DNS entry is toggling between these two things whenever I do the hard refresh. I have no idea on how to debug this.
If I
wget -O- example.com I have a 301 redirect to https://example.com as expected
wget -O- https://example.com I have a 200 OK as expected
wget -O- keycloak.example.com I have a 301 redirect to https://keycloak.example.com as expected
wget -O- https://keycloak.example.com I have a 200 OK as expected
Then everything is fine. Seems like the problem only occurs in the browser.
I tried opening the pages in Incognito mode, but the problem persists.
Can someone help me in debugging this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "example.com"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "keycloak.example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "keycloak.example.com"
  gateways:
  - keycloak-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the browser reusing the same TCP connection for both services.
There is a discussion about it here: https://github.com/istio/istio/issues/9429
By using a different certificate for each Gateway, the problem disappears.
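As a sketch, the HTTPS servers of the two Gateways would then reference distinct secrets (the secret names here are illustrative):

# nginx-gateway HTTPS server
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: SIMPLE
    credentialName: example-com-cert            # illustrative secret for example.com only
  hosts:
  - "example.com"

# keycloak-gateway HTTPS server
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: SIMPLE
    credentialName: keycloak-example-com-cert   # illustrative secret for keycloak.example.com only
  hosts:
  - "keycloak.example.com"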
I want to enable HTTPS for my web app, hosted in GKE. I have a domain name, arindam.fr; the zone is set up in Cloud DNS with NS records and an A record.
I am getting this error when accessing https://arindam.fr/:
This site can’t be reached: arindam.fr’s server IP address could not be found.
https://github.com/arindam-b/DNSissue/blob/master/3.png
https://github.com/arindam-b/DNSissue/blob/master/1.PNG "Cloud DNS"
My ingress yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - arindam.fr
    secretName: tls-staging-cert
  rules:
  - host: arindam.fr
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-app
          servicePort: 8080
Before that, I installed the nginx ingress controller and cert-manager using helm:
helm install --name nginx-ingress stable/nginx-ingress
The domain's nameservers are configured in my domain registration at namecheap.com:
https://github.com/arindam-b/DNSissue/blob/master/2.PNG "NS Configuration"
My Deployment & Service yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-app
        track: stable
    spec:
      containers:
      - name: hello-app
        image: "eu.gcr.io/rcup-mza-dev/hello-app:latest"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-app
  # type: LoadBalancer
Am I missing something?
It seems that your registrar's configuration is not propagating Google's nameservers correctly; I just checked it in the following link. I also found this guide on how to change nameservers in namecheap; keep in mind that you need to select the "Custom DNS" option to specify Google's nameservers.
After your registrar propagates the nameservers correctly, which can take between 24 and 72 hours, you will be able to reach your domain.
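One way to verify the delegation from a shell (a sketch using standard dig queries):

# which nameservers does public DNS currently return for the domain?
dig NS arindam.fr +short

# once the delegation to Cloud DNS has propagated, the A record should resolve
dig A arindam.fr +short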
DNSSEC was turned off, so the zone was not propagating properly. After turning it on, it works fine.
I have a bare-metal Kubernetes cluster (1.13) and am running the nginx ingress controller (v0.22.0, deployed via helm into the default namespace).
I have an ingress in a different namespace that attempts to use the nginx controller.
#ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp
  annotations:
    kubernetes.io/backend-protocol: https
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  tls:
  - hosts:
    - my-host
    secretName: tls-cert
  rules:
  - host: my-host
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: https
        path: "/api/(.*)"
The nginx controller successfully finds the ingress and says that there are endpoints. If I hit the endpoint, I get a 400 with no content. If I turn on custom-http-headers, then I get a 404 from nginx; my service is not being hit. According to the rewrite logging, the URL is being rewritten correctly.
I have also hit the service directly from inside the pod, and that works as well.
#service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - name: https
    protocol: TCP
    port: 5000
    targetPort: https
  selector:
    app: my-app
  clusterIP: <redacted>
  type: ClusterIP
  sessionAffinity: None
What could be going wrong?
EDIT: Disabling HTTPS everywhere still gives the same 400 error. However, if my app is expecting HTTPS requests and nginx is sending HTTP requests, then the requests do reach the app (but it can't process them).
Nginx will silently fail with a 400 if the request headers are invalid (for example, special characters in them). You can debug that using tcpdump.
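A minimal way to see what actually reaches the backend, assuming you can run tcpdump on the node hosting the pod (the pod IP is a placeholder; port 5000 is taken from the Service above):

# find the backend pod's IP and the node it runs on
kubectl get pod -l app=my-app -o wide

# on that node, capture and print the proxied requests going to the pod
tcpdump -A -i any host <pod-ip> and port 5000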
I am trying to create a redirect rule to GCS buckets with my own certs. I have the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: proxy-to-gcs
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-to-gcs
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: bucket_name/public
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: secret-name-tls
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: proxy-to-gcs
          servicePort: 80
When I try to reach www.example.com/.well-known/acme-challenge/ (the kube-lego endpoint), I see the Google Cloud Storage bucket's 404 page. The problem is the rewrite-target, which doesn't take the existence of kube-lego into account. Any suggestions? Thanks.
If you just want to host a static website from a bucket, you can follow the official doc as a how-to.
For the Ingress part, you can use the HTTP(S) Load Balancer, Google Cloud's own load balancer.
You can route traffic from two URLs to one bucket and have HTTPS on both.
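A rough sketch of the load-balancer side with gcloud (the resource names are illustrative; bucket_name is the bucket from the question):

# expose the bucket as a load balancer backend
gcloud compute backend-buckets create static-site-backend --gcs-bucket-name=bucket_name

# route all traffic on the load balancer to that backend bucket
gcloud compute url-maps create static-site-map --default-backend-bucket=static-site-backend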