Use multiple wildcard TLS certificates with a single GCE load balancer - gcp-load-balancer

I'm trying to use two TLS certificates for two wildcard domains on a single GCE load balancer Ingress object, but it gives me an error that the certificates could not be found, and it stops serving on port 443.
Sample Code:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: loadbalancer
spec:
  tls:
  - secretName: "tls-secret-1"
  - secretName: "tls-secret-2"
  rules:
  - host: "*.domain1.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: fe-svc
          servicePort: 80
  - host: "*.domain2.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: fe2-svc
          servicePort: 80
      - path: /
        backend:
          serviceName: fe2-svc
          servicePort: 80
Here is the sample code. Can anyone please provide a solution?
Thanks.
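For what it's worth, the Ingress spec also lets each tls entry list the hosts its secret covers. Below is a minimal sketch reusing only the secret names and domains from the question; whether the GCE controller needs this to match the two wildcard certificates is an assumption, not a confirmed fix.
# Sketch only: the same Ingress, but each tls entry declares the wildcard host
# its secret covers. Whether this resolves the GCE "certificate not found"
# error is an assumption, not a verified fix.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: loadbalancer
spec:
  tls:
  - hosts:
    - "*.domain1.com"
    secretName: "tls-secret-1"
  - hosts:
    - "*.domain2.com"
    secretName: "tls-secret-2"
  # rules: unchanged from the manifest above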

Related

502 bad gateway error with ALB & Istio when using https

I got a 502 bad gateway error for HTTPS when using Istio and an AWS ALB.
For some reason, I have to put an ALB Ingress in front of my Istio ingress gateway, and I also need to use HTTPS to connect from my Ingress to the Istio ingress gateway. But I get the 502 bad gateway error. If I use HTTP, it works fine.
I can find the following information in the logs of the Istio ingress gateway:
"response_code_details": "filter_chain_not_found"
Does anyone have any idea?
The following is my Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 443
  tls:
  - hosts:
    - "my.hostname.com"
The following is my istio-ingressgateway configuration:
...
serviceAnnotations:
  alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
  alb.ingress.kubernetes.io/healthcheck-port: "30218"
service:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: status-port
    nodePort: 30218
    port: 15021
    protocol: TCP
    targetPort: 15021
...
The following is my Istio Gateway:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - my.hostname.com
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - my.hostname.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: my-tls-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
It works fine if I change the Ingress to use HTTP, as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/group.name: <group name>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <my arn>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: <my subnet>
spec:
  ingressClassName: alb
  rules:
  - host: "my.hostname.com"
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
  tls:
  - hosts:
    - "my.hostname.com"
The ALB does not include the SNI extension when performing the TLS handshake with your Istio ingress gateway. SNI is needed for route matching when you include 'hosts' in your Gateway resource.
This is documented here:
https://istio.io/latest/docs/ops/common-problems/network-issues/#configuring-sni-routing-when-not-sending-sni
The workaround is to configure the Istio 443 listener with hosts: '*' to avoid SNI matching, and then specify the host in the VirtualService resource for routing. You may also do some header matching in the VirtualService resource.
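For illustration, a minimal sketch of that workaround, reusing the Gateway name, hostname, and credential from the question; the VirtualService name and the backend destination are hypothetical placeholders:
# Sketch: the 443 listener accepts any SNI (hosts: "*"), and hostname matching
# moves to the VirtualService instead of the Gateway.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"                 # avoid SNI matching on the HTTPS listener
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: my-tls-cert
      mode: SIMPLE
---
# Hypothetical VirtualService that carries the host-based routing instead.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-hostname-routes
  namespace: istio-system
spec:
  hosts:
  - my.hostname.com
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-backend-service   # placeholder: the in-mesh service to route to
        port:
          number: 80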
AWS has stated that forwarding the SNI from the client --> ALB connection is on their list, but there is no roadmap or ETA.
Also check these:
https://repost.aws/questions/QUOg0LUwafRFaorbsrYDP7xA/does-alb-send-sni-information-in-tls-handshake-to-a-back-end-server
https://kevin.burke.dev/kevin/amazons-albs-insecure-internal-traffic/

Websockets on AKS using GraphQL Not Connecting

I currently have an AKS cluster set up running a GraphQL server and a normal NGINX ingress. We're attempting to onboard GraphQL subscriptions, which use WebSockets. The URL that GraphQL uses for WebSockets is the same URL that is used for GraphQL queries. We've tried adding proxy configuration to enable WebSocket ingress, but the connection is never established. Running the GraphQL server without Kubernetes is successful, so we think there is something Kubernetes-specific going on here... has anyone had any success doing this? The relevant ingress config is below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: web
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
spec:
  tls:
  - hosts:
    - my.host
    - my-api.host
    secretName: tls-secret
  rules:
  - host: my.host
    http:
      paths:
      - path: /graphql
        backend:
          serviceName: webapi
          servicePort: 80
      - path: /(.*)
        backend:
          serviceName: website
          servicePort: 80
  - host: my-api.host
    http:
      paths:
      - backend:
          serviceName: webapi
          servicePort: 80
        path: /(.*)
You might want to start from a bit less complex config like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: web
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - my.host
    secretName: tls-secret
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        backend:
          serviceName: website
          servicePort: 80
      - path: /graphql
        backend:
          serviceName: webapi
          servicePort: 80
I switched the config to one endpoint instead of two and removed some configuration, since NGINX handles WebSockets out of the box. I removed the regexes, added the tls-acme annotation, and also ssl-redirect. In summary, I just made it a bit less complex. Get this up and running first, and then start applying the advanced configuration such as the timeouts you had.
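Once that simplified Ingress is serving traffic, the WebSocket-related tuning from the original manifest can be layered back in one annotation at a time. A sketch, reusing only names and values that already appear in the question:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: web
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    # Long proxy timeouts keep idle GraphQL subscription sockets open;
    # the controller's 60s defaults would otherwise close them.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  tls:
  - hosts:
    - my.host
    secretName: tls-secret
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        backend:
          serviceName: website
          servicePort: 80
      - path: /graphql
        backend:
          serviceName: webapi
          servicePort: 80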
Happy to hear any feedback on this!

nginx-ingress - https configuration - server IP address could not be found

I want to enable HTTPS for my web app, hosted in GKE. I have a domain name, arindam.fr; its DNS zone is set up in Cloud DNS, with NS records and a Type A record.
I am getting this error when accessing https://arindam.fr/:
This site can’t be reached. arindam.fr’s server IP address could not be found.
Screenshots: https://github.com/arindam-b/DNSissue/blob/master/3.png and https://github.com/arindam-b/DNSissue/blob/master/1.PNG ("Cloud DNS")
My Deployment and Service YAML are shown further down. My Ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - arindam.fr
    secretName: tls-staging-cert
  rules:
  - host: arindam.fr
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-app
          servicePort: 8080
Before that, I installed the NGINX ingress controller and cert-manager using Helm:
helm install --name nginx-ingress stable/nginx-ingress
The domain's NS records are set in my domain registration at namecheap.com:
https://github.com/arindam-b/DNSissue/blob/master/2.PNG ("NS Configuration")
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-app
        track: stable
    spec:
      containers:
      - name: hello-app
        image: "eu.gcr.io/rcup-mza-dev/hello-app:latest"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-app
  # type: LoadBalancer
Am I missing something?
It seems that your registrar's configuration is not propagating Google's nameservers correctly; I just checked it in the following link. I also found this guide on how to change the NS records in Namecheap; keep in mind that you need to select the "Custom DNS" option to specify Google's nameservers.
After your registrar propagates the nameservers correctly, which can take between 24 and 72 hours, you will be able to reach your domain.
DNSSEC was turned off, so it was not propagating properly. After turning it on, it works fine.

Kubernetes with rewrite-target and kube-lego

I am trying to create a redirect rule to a GCS bucket with my own certs. I have the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: proxy-to-gcs
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-to-gcs
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: bucket_name/public
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: secret-name-tls
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: proxy-to-gcs
          servicePort: 80
When I try to reach www.example.com/.well-known/acme-challenge/ as the kube-lego endpoint, I see the Google Storage bucket's 404 page. The problem is that the rewrite-target annotation doesn't take kube-lego's existence into account. Any suggestions? Thanks.
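One pattern worth sketching here (not what the answer below proposes): since rewrite-target applies to the whole Ingress, the ACME challenge path can be served from a second Ingress for the same host that carries no rewrite annotation, so challenge requests are not rewritten into the bucket. The solver service name and port (kube-lego-nginx:8080) are assumptions about the kube-lego installation:
# Assumed sketch: a separate Ingress carrying only the ACME challenge path,
# without ingress.kubernetes.io/rewrite-target, for the same host.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-to-gcs-acme
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /.well-known/acme-challenge
        backend:
          serviceName: kube-lego-nginx   # assumed kube-lego solver service
          servicePort: 8080              # assumed solver port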
If you just want to host a static website from a bucket, you can use the official doc as a how-to.
For ingress, you can use the HTTP(S) Load Balancer, Google Cloud's own load balancer.
You can route your traffic from two URLs to one bucket and have HTTPS on both.

How to get Kubernetes Ingress to terminate SSL and proxy to service?

I have a CentOS 7 deployment with Kubernetes on bare metal. Everything works great. However, I would like to get an Ingress working. In brief, what I want to do is terminate SSL within the Ingress and have plain HTTP between the Ingress and my service. This is what I did:
1) I hacked Weave to allow hostNetwork.
2) I have an ingress controller set up as per:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    k8s-app: nginx-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        role: edge-router
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --enable-ssl-passthrough
        # - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
        volumeMounts:
        - name: tls-dhparam-vol
          mountPath: /etc/nginx-ssl/dhparam
      volumes:
      - name: tls-dhparam-vol
        secret:
          secretName: tls-dhparam
Note the DaemonSet and the nodeSelector, and also hostNetwork: true, so that my Kubernetes nodes will open up ports 80 and 443 to listen for routing.
So I attempt to go to http://foo.bar.com and, unsurprisingly, nothing: I just get the default backend - 404 page. I need the Ingress rule...
3) So I create an Ingress rule like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hub
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.org/ssl-services: "hub"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-dhparam
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hub
          servicePort: 8000
So it works great... for HTTP. When I go to my node at http://foo.bar.com I can access my service (hub) and can log on. However, since one has to log on, it only makes sense to enforce HTTPS...
My problem is that when I switch my browser over to https://foo.bar.com, I end up with the default backend - 404 page.
Looking at the cert presented, I see that it is one created by Kubernetes:
Kubernetes Ingress Controller Fake Certificate
Self-signed root certificate
Checking my secrets:
$ kubectl -n ingress-nginx get secrets
NAME                                       TYPE                                  DATA      AGE
default-token-kkd2j                        kubernetes.io/service-account-token   3         12m
nginx-ingress-serviceaccount-token-7f2sq   kubernetes.io/service-account-token   3         12m
tls-dhparam                                Opaque                                1         8m
What am I doing wrong?
The issue was that using a PEM file didn't seem to work (and there was no noticeable error associated with it).
It worked when I switched over to a TLS cert/key via:
kubectl create secret tls tls-certificate --key my.key --cert my.cer
In your example, it looks like your Ingress doesn't explicitly declare metadata.namespace. If it is ending up in the default namespace while the tls-dhparam Secret is in the ingress-nginx namespace, that would be the problem. The TLS Secrets for an Ingress must be in the same namespace as the Ingress itself.
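To illustrate that namespace point, a minimal sketch with the Secret and the Ingress placed together; the ingress-nginx namespace here simply mirrors the question (put both wherever the Ingress and its backend actually live), and the certificate data is shown as placeholders:
# Sketch: the TLS Secret and the Ingress that references it live in the same
# namespace, declared explicitly on both objects.
apiVersion: v1
kind: Secret
metadata:
  name: tls-certificate
  namespace: ingress-nginx        # same namespace as the Ingress below
type: kubernetes.io/tls
data:
  tls.crt: <base64 of my.cer>
  tls.key: <base64 of my.key>
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hub
  namespace: ingress-nginx        # declared explicitly so it matches the Secret
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-certificate
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hub
          servicePort: 8000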
