Go API deployed on GKE throws SSL Error: Unable to verify the first certificate. Am I missing something? - go

Following is my Kubernetes configuration. The API deployed using this config works as expected when SSL verification is disabled by the client or when HTTP is used instead of HTTPS, but with verification enabled it throws SSL Error: Unable to verify the first certificate. The SSL certificate files are added as a Kubernetes secret, and the API is exposed on port 8080.
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "test-config"
  namespace: "default"
  labels:
    app: "test"
data:
  ENV: "DEV"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "test"
  namespace: "default"
  labels:
    app: "test"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "test"
  template:
    metadata:
      labels:
        app: "test"
    spec:
      containers:
      - name: "test"
        image: "gcr.io/test-project/test:latest"
        env:
        - name: "ENV"
          valueFrom:
            configMapKeyRef:
              key: "ENV"
              name: "test-config"
---
apiVersion: "extensions/v1beta1"
kind: "Ingress"
metadata:
  name: "test-ingress"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-static-ip"
  labels:
    app: "test"
spec:
  tls:
  - hosts:
    - "test.myhost.com"
    secretName: "test-ssl-certificate"
  backend:
    serviceName: "test-service-nodeport"
    servicePort: 8080
  rules:
  - host: "test.myhost.com"
    http:
      paths:
      - path: "/*"
        backend:
          serviceName: "test-service-nodeport"
          servicePort: 8080
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "test-service-nodeport"
spec:
  selector:
    app: "test"
  ports:
  - protocol: "TCP"
    port: 8080
    targetPort: 8080
  type: "NodePort"
Go server code
http.HandleFunc("/hello", HelloServer)
err := http.ListenAndServeTLS(":8080", "server.crt", "server.key", nil)
if err != nil {
    log.Fatal("ListenAndServeTLS: ", err)
}

A potential cause is that the intermediate certificates were not installed properly on the server, breaking the certificate chain. Even if it works in your browser, the server may not be sending all the public certificates in the chain that a cache-empty client needs for verification. Here are some preliminary troubleshooting steps:
- Verify that your certificate chain is complete [https://certificatechain.io/]
- Verify your server's configuration [https://www.ssllabs.com/ssltest/ or https://www.sslshopper.com/ssl-checker.html]
Look for this error:
This server's certificate chain is incomplete.
And this:
Chain issues.........Incomplete
If you encounter these issues, it means the web server you are connecting to is misconfigured and omitted the intermediate certificate in the chain it sent to you. Your server needs to serve not just the certificate for your domain, but the intermediate certificates too.
The intermediate certificate should be installed on the server, along with the server certificate. Root certificates are embedded in software applications, browsers, and operating systems. The application serving the certificate has to send the complete chain: the server certificate itself plus all the intermediates.
Refer to the related Stack Overflow post on combining the server certificate and intermediate certificate into a chained certificate for more information.
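In this setup the certificate reaches the load balancer through the test-ssl-certificate secret, so one way to apply the fix is to rebuild that secret so its tls.crt carries the whole chain, server certificate first, then the intermediates. A minimal sketch, assuming a hypothetical fullchain.crt produced by concatenating server.crt and the CA's intermediate certificate(s):
# Sketch only: the TLS secret referenced by the Ingress, rebuilt so that
# tls.crt carries the full chain (leaf first, then intermediates).
# fullchain.crt is a hypothetical concatenation of server.crt and the
# intermediate certificate(s) supplied by the CA.
apiVersion: v1
kind: Secret
metadata:
  name: test-ssl-certificate
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64 of fullchain.crt>
  tls.key: <base64 of server.key>
The same applies to the Go server itself: the certFile argument of ListenAndServeTLS may contain the concatenated chain, so pointing it at the full-chain file instead of the bare server.crt serves the intermediates as well.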

Related

Cipher mismatch error while trying to access an app deployed in GKE as HTTPS Ingress

I am trying to deploy a Spring Boot application running on port 8080. My target is to have HTTPS for a custom subdomain with Google managed certificates.
Here are my YAMLs.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
      namespace: my-namespace
  template:
    metadata:
      labels:
        app: my-deployment
        namespace: my-namespace
    spec:
      containers:
      - name: app
        image: gcr.io/PROJECT_ID/IMAGE:TAG
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            ephemeral-storage: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            ephemeral-storage: "512Mi"
            cpu: "250m"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/backend-config: '{"default": "my-http-health-check"}'
spec:
  selector:
    app: my-deployment
    namespace: my-namespace
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: http
    protocol: TCP
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-name-space
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ip
    networking.gke.io/managed-certificates: my-cert
    kubernetes.io/ingress.class: "gce"
  labels:
    app: my-ingress
spec:
  rules:
  - host: my-domain.com
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              name: http
I followed various documentation; most of it helped me get HTTP working, but I couldn't make HTTPS work and it ends with the error ERR_SSL_VERSION_OR_CIPHER_MISMATCH. It looks like there is an issue with the global forwarding rule; the ports show 443-443. What is the correct way to terminate the HTTPS traffic at the load balancer and route it to the backend app over HTTP?
From the information provided, I can see that the ManagedCertificate object is missing. You need to create a YAML file with the following structure:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-cert
spec:
  domains:
  - <your-domain-name1>
  - <your-domain-name2>
Then apply it with the command kubectl apply -f file-name.yaml.
Provisioning of the Google-managed certificate can take up to 60 minutes. You can check the status of the certificate with kubectl describe managedcertificate my-cert and wait for the status to become "Active".
A few prerequisites you need to be aware of, though:
- You must own the domain name, and it must be no longer than 63 characters. You can use Google Domains or another registrar.
- The cluster must have the HttpLoadBalancing add-on enabled.
- Your "kubernetes.io/ingress.class" must be "gce".
- You must apply the Ingress and ManagedCertificate resources in the same project and namespace.
- Create a reserved (static) external IP address. Reserving a static IP address guarantees that it remains yours, even if you delete the Ingress. If you do not reserve an IP address, it might change, requiring you to reconfigure your domain's DNS records.
Finally, you can take a look at Google's complete guide on Creating an Ingress with a Google-managed certificate.

Deploying Spring Boot with http/2 enabled, in Kubernetes with ingress and a Namecheap certificate

I want to deploy a Spring Boot app with HTTP/2 enabled in a Kubernetes cluster with a Namecheap certificate, but I always get this error:
io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f66617669636f6e2e69636f20485454502f312e310d0a486f73743a206261636b2d61646d696e2d68747470322e6361707461696e776f726b732e636f6d0d0a582d526571756573742d49443a2032393838326665623234633864333063346565373063363238623033666464390d0a582d5265616c2d49503a2035312e38332e3130362e3234380d0a582d466f727761726465642d466f723a2035312e38332e3130362e3234380d0a582d466f727761726465642d486f73743a206261636b2d61646d696e2d68747470322e6361707461696e776f726b732e636f6d0d0a582d466f727761726465642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d466f727761726465642d536368656d653a2068747470730d0a582d536368656d653a2068747470730d0a7365632d63682d75613a2022476f6f676c65204368726f6d65223b763d223933222c2022204e6f743b41204272616e64223b763d223939222c20224368726f6d69756d223b763d223933220d0a7365632d63682d75612d6d6f62696c653a203f300d0a557365722d4167656e743a204d6f7a696c6c612f352e30202857696e646f7773204e542031302e303b2057696e36343b2078363429204170706c655765624b69742f3533372e333620284b48544d4c2c206c696b65204765636b6f29204368726f6d652f39332e302e343537372e3832205361666172692f3533372e33360d0a7365632d63682d75612d706c6174666f726d3a202257696e646f7773220d0a4163636570743a20696d6167652f617669662c696d6167652f776562702c696d6167652f61706e672c696d6167652f7376672b786d6c2c696d6167652f2a2c2a2f2a3b713d302e380d0a5365632d46657463682d536974653a2073616d652d6f726967696e0d0a5365632d46657463682d4d6f64653a206e6f2d636f72730d0a5365632d46657463682d446573743a20696d6167650d0a526566657265723a2068747470733a2f2f6261636b2d61646d696e2d68747470322e6361707461696e776f726b732e636f6d2f676574416c6c44696374696f6e6172790d0a4163636570742d456e636f64696e673a20677a69702c206465666c6174652c2062720d0a4163636570742d4c616e67756167653a2066722d46522c66723b713d302e390d0a0d0a
My configs are:
application.properties:
server.port=8443
server.http2.enabled=true
server.ssl.enabled=true
server.ssl.key-store=classpath:keystore/cert.p12
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=password
Dockerfile:
FROM openjdk:11.0.8-slim
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.package.app"]
ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: example
  namespace: http2
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: service-http2
          servicePort: 8443
        path: /
  tls:
  - hosts:
    - domain.com
    secretName: secret-tls
cert: I have two files:
exemple.cert
exemple.ca-bundle
I used this command to convert my cert to .p12:
openssl pkcs12 -export -in cert.crt -inkey key.key -out cert.p12
my.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: back-http2
  namespace: http2
  labels:
    app: back-http2
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
  - name: https
    protocol: TCP
    port: 8443
  selector:
    app: back-http2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back-http2-deployment
  namespace: http2
  labels:
    app: back-http2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back-http2
  template:
    metadata:
      labels:
        app: back-http2
    spec:
      containers:
      - name: back-dev
        image: docker/registry:back-http2
        imagePullPolicy: Always
        ports:
        - name: http
          protocol: TCP
          containerPort: 8080
        - name: https
          protocol: TCP
          containerPort: 8443
      imagePullSecrets:
      - name: secret
versions:
spring boot : 2.4.2
kubernetes : 1.20.2
nginx ingress controller deployed with helm : ingress-nginx-4.0.3
Any help would be greatly appreciated! Thank you!
You need to configure TLS passthrough in the nginx ingress definition:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Make sure that the ingress controller itself is started with support for this flag, e.g.:
args:
  - --enable-ssl-passthrough
The reason behind this is that HTTP/2 requires TLS by default.
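Putting the two together, a sketch of how the Ingress above might look with passthrough enabled. This is illustrative only; with ssl-passthrough, ingress-nginx routes the raw TLS connection by SNI straight to the pod serving HTTPS on 8443, so nginx no longer terminates TLS and path-based rules are effectively bypassed:
# Sketch: the same Ingress with SSL passthrough enabled, so the TLS
# connection is handed through to the Spring Boot pod on 8443.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
  namespace: http2
  annotations:
    kubernetes.io/ingress.class: nginx
    # Route at the TLS level based on the SNI host instead of
    # terminating TLS at nginx.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: service-http2
          servicePort: 8443
        path: /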

Istio Gateway Fail To Connect Via HTTPS

Deployments in a GKE cluster with Istio are working correctly via HTTP, but when I tried to secure them with cert-manager using the following resources, HTTPS requests fail on curl like so:
Immediate connect fail for 64:ff9b::2247:fd8a: Network is unreachable
* connect to 34.71.253.138 port 443 failed: Connection refused
What should I do to make it work with HTTPS as well?
ClusterIssuer with the following configuration:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: iprocureservers#iprocu.re
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    # ACME DNS-01 provider configurations
    - dns01:
        # Google Cloud DNS
        clouddns:
          # Secret from the google service account key
          serviceAccountSecretRef:
            name: cert-manager-credentials
            key: gcp-dns-admin.json
          # The project in which to update the DNS zone
          project: iprocure-server
Certificate configuration like so, which produced a certificate in a Ready: True state:
apiVersion: cert-manager.io/v1alpha3
kind: Certificate
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  secretName: letsencrypt-staging
  commonName: "*.iprocure.tk"
  dnsNames:
  - '*.iprocure.tk'
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
And lastly a Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: iprocure-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: false
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: letsencrypt-staging
If I do kubectl describe certificate -n istio-system:
Name:         letsencrypt-staging
Namespace:    istio-system
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2020-10-13T13:32:37Z
  Generation:          1
  Resource Version:    28030994
  Self Link:           /apis/cert-manager.io/v1/namespaces/istio-system/certificates/letsencrypt-staging
  UID:                 ad838d28-5349-4aaa-a618-cc3bfc316e6e
Spec:
  Common Name:  *.iprocure.tk
  Dns Names:
    *.iprocure.tk
  Issuer Ref:
    Kind:  ClusterIssuer
    Name:  letsencrypt-staging-clusterissuer
  Secret Name:  letsencrypt-staging-cert-secret
Status:
  Conditions:
    Last Transition Time:  2020-10-13T13:35:05Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:     2021-01-11T12:35:05Z
  Not Before:    2020-10-13T12:35:05Z
  Renewal Time:  2020-12-12T12:35:05Z
  Revision:      1
Events:  <none>
Running kubectl get certificates -o wide -n istio-system yields:
NAME                  READY   SECRET                            ISSUER                              STATUS                                          AGE
letsencrypt-staging   True    letsencrypt-staging-cert-secret   letsencrypt-staging-clusterissuer   Certificate is up to date and has not expired   17h
Issue
I assume that HTTPS wasn't working because of the requirements that have to be met if you want to use cert-manager with Istio in older versions.
Solution
As @Yunus Einsteinium mentioned in the comments:
Thank you for guiding me in the right direction. Using the OSS Istio, not the GKE one, is the way to go! I managed to make HTTPS work!
So the solution here was to use OSS Istio installed with istioctl instead of the older Istio GKE addon.

K8s + Istio + Firefox hard refresh. Accessing one service causes 404 on another service, until the other service is accessed

Learning k8s + Istio here. I've set up a 2-node + 1-master cluster with kops. I have Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service, and I'm hitting a super weird bug I have no idea where it's coming from.
So, I have:
- a Keycloak service
- an nginx service
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route53 DNS records for
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the Keycloak service, and there was only that service, I had no problem. But when I added the dummy nginx service, I started getting this:
I would use Firefox to go to keycloak.example.com and get a 404. If I do a hard refresh, then the page loads.
Then I would go to example.com and get a 404. If I do a hard refresh, then the page loads.
If I do a hard refresh on one page, then when I go to the other page, I have to do a hard reload or I get a 404. It's like some DNS entry is toggling between these two things whenever I do the hard refresh. I have no idea how to debug this.
If I:
- wget -O- example.com, I get a 301 redirect to https://example.com as expected
- wget -O- https://example.com, I get a 200 OK as expected
- wget -O- keycloak.example.com, I get a 301 redirect to https://keycloak.example.com as expected
- wget -O- https://keycloak.example.com, I get a 200 OK as expected
then everything is fine. It seems like the problem only occurs in the browser.
I tried opening the pages in Incognito mode, but the problem persists.
Can someone help me in debugging this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "example.com"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "keycloak.example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "keycloak.example.com"
  gateways:
  - keycloak-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the same TCP connection being reused for both services.
There is a discussion about it here: https://github.com/istio/istio/issues/9429
By using a different certificate for each Gateway's HTTPS port, the problem disappears.
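For illustration, a sketch of the two HTTPS server blocks with distinct credentials; the secret names ingress-cert-nginx and ingress-cert-keycloak are hypothetical:
# Sketch: each Gateway references its own TLS secret instead of
# both sharing ingress-cert.
# In nginx-gateway:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert-nginx      # hypothetical secret name
    hosts:
    - "example.com"
# In keycloak-gateway:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert-keycloak   # hypothetical secret name
    hosts:
    - "keycloak.example.com"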

Spring Boot, Minikube, Istio and Keycloak: "Invalid parameter: redirect_uri"

I have an application running in Minikube that works with the ingress-gateway as expected. A Spring Boot app is called, the view is displayed, and a protected resource is called via a link. The call is forwarded to Keycloak, authorized via the login mask, and the protected resource is displayed as expected.
With Istio the redirecting fails with the message: "Invalid parameter: redirect_uri".
My Istio Gateway config
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio-system
  name: istio-bomc-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
My virtualservice config
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-bomc-app-hrm-virtualservice
  namespace: bomc-app
spec:
  hosts:
  - "*"
  gateways:
  - istio-bomc-app-gateway.istio-system.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /bomc-hrm
    route:
    - destination:
        host: bomc-hrm-service.bomc-app.svc.cluster.local
        port:
          number: 80
After clicking the protected link, I get the following URI in the browser:
http://192.168.99.100:31380/auth/realms/bomc-hrm-realm/protocol/openid-connect/auth?response_type=code&client_id=bomc-hrm-app&redirect_uri=http%3A%2F%2F192.168.99.100%2Fbomc-hrm%2Fui%2Fcustomer%2Fcustomers&state=4739ab56-a8f3-4f78-bd29-c05e7ea7cdbe&login=true&scope=openid
I see the redirect_uri=http%3A%2F%2F192.168.99.100%2F is not complete. The port 31380 is missing.
How does Istio VirtualService need to be configured?
Have you checked with the following command in Google Cloud? Maybe you will find some clues using it:
kubectl describe
