HTTPS encryption is not active for my domain. My certificate Order is not completed - https

I am working with cert-manager in my Kubernetes cluster in order to get certificates signed by the Let's Encrypt CA for a service application running inside the cluster.
I am performing the following steps in the order presented, and I've tried to provide as much detail of my process as possible so the behavior can be understood.
Install the CustomResourceDefinition resources separately
⟩ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
Label the cert-manager namespace to disable resource validation
⟩ kubectl label namespace kube-system certmanager.k8s.io/disable-validation=true
namespace/kube-system labeled
Install the cert-manager Helm chart
⟩ helm install \
--name cert-manager \
--namespace kube-system \
--version v0.7.0 \
jetstack/cert-manager
I've gone through the steps in this guide in order to avoid possible problems, and all the steps look OK ...
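For reference, a quick way to confirm that the installation looks healthy (a minimal sketch; the pod names will differ in your cluster):
⟩ kubectl get pods -n kube-system | grep cert-manager
⟩ kubectl get crd | grep certmanager.k8s.io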
Creating my ingress
I am using kong-ingress-controller to manage the ingress process.
⟩ kubectl get pod,svc,deploy,replicaset -n kong | grep kong-ingress-controller
pod/kong-ingress-controller-667b4748d4-ccj8z 2/2 Running 14 95m
service/kong-ingress-controller NodePort 10.0.48.131 <none> 8001:32257/TCP 3d19h
deployment.extensions/kong-ingress-controller 1 1 1 1 3d19h
replicaset.extensions/kong-ingress-controller-667b4748d4 1 1 1 3d19h
This means that my external IP address is provided by kong-proxy and is 52.166.60.158:
⟩ kubectl get svc -n kong | grep kong-proxy
kong-proxy LoadBalancer 10.0.153.8 52.166.60.158 80:31577/TCP,443:32323/TCP 3d21h
I initially created the ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kong-ingress-zcrm365
# namespace: default
annotations:
# kubernetes.io/ingress.class: "nginx" # Don't include it in order to use kong-ingress-controller
# add an annotation indicating the issuer to use.
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- path: "/"
backend:
serviceName: zcrm365dev
servicePort: 80
#- backend:
# serviceName: zcrm365dev
# servicePort: 80
# path: /
tls: # < placing a host in the TLS config will indicate a cert should be created
- hosts:
- test1kongletsencrypt.possibilit.nl
secretName: cert-manager-webhook-webhook-tls
# for the moment I've included this secret which was created by cert-manager installation
Apply it.
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 created
And my ingress is being picked up by kong-ingress-controller:
⟩ kubectl describe ingress kong-ingress-zcrm365
Name: kong-ingress-zcrm365
Namespace: default
Address: 52.166.60.158
Default backend: default-http-backend:80 (<none>)
TLS:
cert-manager-webhook-webhook-tls terminates test1kongletsencrypt.possibilit.nl
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/ zcrm365dev:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"cert-manager-webhook-webhook-tls"}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 3m30s kong-ingress-controller Ingress default/kong-ingress-zcrm365
Normal UPDATE 3m28s kong-ingress-controller Ingress default/kong-ingress-zcrm365
Creating a ClusterIssuer
I am going to create a ClusterIssuer. I could create an Issuer instead, but I've started with a ClusterIssuer. Which is the better alternative? That depends on our deployment and future requirements, mostly around namespaces (for comparison, a namespace-scoped Issuer sketch is included right after applying the ClusterIssuer below).
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: b.garcia@possibilit.nl
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
http01: {}
Apply it
⟩ kubectl apply -f 01-lets-encrypt-issuer-staging.yaml
clusterissuer.certmanager.k8s.io/letsencrypt-staging created
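For comparison, the namespace-scoped alternative mentioned earlier would be an Issuer with the same spec (a sketch; only the kind changes and it lives in a single namespace):
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: default
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: b.garcia@possibilit.nl
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}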
This ClusterIssuer was registered with the Let's Encrypt ACME server:
⟩ kubectl describe clusterissuers letsencrypt-staging
Name: letsencrypt-staging
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-staging"},"spec":{"acm...
API Version: certmanager.k8s.io/v1alpha1
Kind: ClusterIssuer
Metadata:
Creation Timestamp: 2019-03-15T11:38:03Z
Generation: 1
Resource Version: 623999
Self Link: /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-staging
UID: cb48b391-4716-11e9-a113-e27267a7d354
Spec:
Acme:
Email: b.garcia@possibilit.nl
Http 01:
Private Key Secret Ref:
Name: letsencrypt-staging
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Status:
Acme:
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/8579841
Conditions:
Last Transition Time: 2019-03-15T11:38:05Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
Modifying my ingress resource created previously
Now that I've created our Let's Encrypt staging ClusterIssuer, I am ready to modify the Ingress resource created above and enable TLS encryption for the test1kongletsencrypt.possibilit.nl host by adding the following.
I am going to add the certmanager.k8s.io/cluster-issuer: letsencrypt-staging annotation and use a TLS secret named letsencrypt-staging, matching the ClusterIssuer.
The ingress now looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kong-ingress-zcrm365
#namespace: default
annotations:
# kubernetes.io/ingress.class: "nginx" #new
# certmanager.k8s.io/acme-challenge-type: http01
# add an annotation indicating the issuer to use.
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- path: "/"
backend:
serviceName: zcrm365dev
servicePort: 80
tls:
- hosts:
- test1kongletsencrypt.possibilit.nl
secretName: letsencrypt-staging # I've added this secret of letsencrypt cluster issuer
Apply it
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 configured
This update to the ingress causes cert-manager to create an additional ingress named cm-acme-http-solver-jr4fg:
⟩ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
cm-acme-http-solver-jr4fg test1kongletsencrypt.possibilit.nl 80 33s
kong-ingress-zcrm365 test1kongletsencrypt.possibilit.nl 52.166.60.158 80, 443 56m
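The solver pod and service that cert-manager creates for this challenge should be visible too; assuming they carry the same acme-http-domain/acme-http-token labels that the logs show for the solver ingress, something like this should list them (a sketch):
⟩ kubectl get pods,svc -l certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223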
The detail of cm-acme-http-solver-jr4fg ingress is:
⟩ kubectl get ingress cm-acme-http-solver-jr4fg -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
creationTimestamp: "2019-03-15T12:10:57Z"
generateName: cm-acme-http-solver-
generation: 1
labels:
certmanager.k8s.io/acme-http-domain: "4095675862"
certmanager.k8s.io/acme-http-token: "657526223"
name: cm-acme-http-solver-jr4fg
namespace: default
ownerReferences:
- apiVersion: certmanager.k8s.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: Challenge
name: letsencrypt-staging-2613163196-0
uid: 638f1701-471b-11e9-a113-e27267a7d354
resourceVersion: "628284"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/cm-acme-http-solver-jr4fg
uid: 640ef483-471b-11e9-a113-e27267a7d354
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- backend:
serviceName: cm-acme-http-solver-svmvw
servicePort: 8089
path: /.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE
status:
loadBalancer:
ingress:
- ip: 52.166.60.158
And the detail of the kong-ingress-zcrm365 ingress is:
⟩ kubectl describe ingress kong-ingress-zcrm365
Name: kong-ingress-zcrm365
Namespace: default
Address: 52.166.60.158
Default backend: default-http-backend:80 (<none>)
TLS:
letsencrypt-staging terminates test1kongletsencrypt.possibilit.nl
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/ zcrm365dev:80 (<none>)
Annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/cluster-issuer":"letsencrypt-staging"},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"letsencrypt-staging"}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 60m kong-ingress-controller Ingress default/kong-ingress-zcrm365
Normal UPDATE 4m25s (x2 over 60m) kong-ingress-controller Ingress default/kong-ingress-zcrm365
Normal CreateCertificate 4m25s cert-manager Successfully created Certificate "letsencrypt-staging"
We can see that our ingress is still handled by kong-ingress-controller, and that the letsencrypt-staging Certificate has been created in the default namespace:
⟩ kubectl get certificates
NAME
letsencrypt-staging
The letsencrypt-staging Certificate has the following detail:
⟩ kubectl describe certificate letsencrypt-staging
Name: letsencrypt-staging
Namespace: default
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-15T12:10:55Z
Generation: 1
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: kong-ingress-zcrm365
UID: 8643558f-4713-11e9-a113-e27267a7d354
Resource Version: 628164
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/letsencrypt-staging
UID: 62b3a31e-471b-11e9-a113-e27267a7d354
Spec:
Acme:
Config:
Domains:
test1kongletsencrypt.possibilit.nl
Http 01:
Dns Names:
test1kongletsencrypt.possibilit.nl
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: letsencrypt-staging
Status:
Conditions:
Last Transition Time: 2019-03-15T12:10:55Z
Message: Certificate issuance in progress. Temporary certificate issued.
Reason: TemporaryCertificate
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Generated 7m24s cert-manager Generated new private key
Normal GenerateSelfSigned 7m24s cert-manager Generated temporary self signed certificate
Normal OrderCreated 7m23s cert-manager Created Order resource "letsencrypt-staging-2613163196"
I can see that my Order is not completed; it only got as far as the OrderCreated event. Seven minutes have passed since I created this Certificate, the Order still has not completed, and for that reason the certificate has not been issued successfully.
Another thing I notice is that the letsencrypt-staging secret created by the letsencrypt-staging ClusterIssuer only contains a tls.key:
⟩ kubectl describe secrets letsencrypt-staging -n kube-system
Name: letsencrypt-staging
Namespace: kube-system
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
tls.key: 1675 bytes
As I understand it, if the Order completed and the certificate were issued, the letsencrypt-staging secret would also contain a tls.crt key, and perhaps the secret would be of type kubernetes.io/tls rather than Opaque?
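Since the Certificate created from my Ingress lives in the default namespace, I can also check the secret it populates there, together with the Order it created (a sketch; once issuance completes the secret should contain both tls.crt and tls.key):
⟩ kubectl get secret letsencrypt-staging -n default -o yaml
⟩ kubectl describe order letsencrypt-staging-2613163196 -n default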
When I look at the logs of my cert-manager pod I get the following output; I think the HTTP-01 challenge is not being completed:
I0315 12:10:57.833858 1 logger.go:103] Calling Discover
I0315 12:10:57.856136 1 pod.go:64] No existing HTTP01 challenge solver pod found for Certificate "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:57.923080 1 service.go:51] No existing HTTP01 challenge solver service found for Certificate "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:57.989596 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 12:10:57.989682 1 ingress.go:98] No existing HTTP01 challenge solver ingress found for Challenge "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:58.014803 1 controller.go:178] ingress-shim controller: syncing item 'default/cm-acme-http-solver-jr4fg'
I0315 12:10:58.014842 1 sync.go:64] Not syncing ingress default/cm-acme-http-solver-jr4fg as it does not contain necessary annotations
I0315 12:10:58.014846 1 controller.go:184] ingress-shim controller: Finished processing work item "default/cm-acme-http-solver-jr4fg"
I0315 12:10:58.015447 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 12:10:58.033431 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I0315 12:10:58.079504 1 controller.go:212] challenges controller: Finished processing work item "default/letsencrypt-staging-2613163196-0"
I0315 12:10:58.079616 1 controller.go:206] challenges controller: syncing item 'default/letsencrypt-staging-2613163196-0'
I0315 12:10:58.079569 1 controller.go:184] orders controller: syncing item 'default/letsencrypt-staging-2613163196'
I get the message No existing HTTP01 challenge solver pod found for Certificate "default/letsencrypt-staging-2613163196-0".
Based on this, I decided to add the certmanager.k8s.io/acme-challenge-type: http01 annotation to my kong-ingress-zcrm365 ingress, but nothing happened: the ingress was updated, but nothing more.
All of this confirms that the TLS certificate was not successfully issued and that HTTPS encryption is not active for my configured domain test1kongletsencrypt.possibilit.nl.
As a result, my letsencrypt-staging Certificate keeps Status: False, and the created Order never advances to completion:
Conditions:
Last Transition Time: 2019-03-15T12:10:55Z
Message: Certificate issuance in progress. Temporary certificate issued.
Reason: TemporaryCertificate
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Generated 51m cert-manager Generated new private key
Normal GenerateSelfSigned 51m cert-manager Generated temporary self signed certificate
Normal Cleanup 5m42s cert-manager Deleting old Order resource "letsencrypt-staging-2613163196"
Normal OrderCreated 5m42s cert-manager Created Order resource "letsencrypt-staging-2965106631"
Normal OrderCreated 39s (x2 over 51m) cert-manager Created Order resource "letsencrypt-staging-2613163196"
Normal Cleanup 39s cert-manager Deleting old Order resource "letsencrypt-staging-2965106631"
How can I get my certificate signed and successfully issued by the Let's Encrypt CA, and HTTPS encryption activated for my domain?
What is happening with these log messages?
kubectl logs -n kube-system cert-manager-6f68b58796-q7txg
I0315 13:06:11.027204 1 logger.go:103] Calling Discover
I0315 13:06:11.032299 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 13:06:11.046081 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I0315 13:06:11.046109 1 controller.go:212] challenges controller: Finished processing work item "default/letsencrypt-staging-2613163196-0"
I0315 13:06:21.046242 1 controller.go:206] challenges controller: syncing item 'default/letsencrypt-staging-2613163196-0'
I've heard that the Let's Encrypt staging environment only issues test certificates, which are a kind of 'fake certificate', and that clients such as my Chrome/Firefox browser may not trust the issuer ...
Is that a reason why I cannot enable HTTPS encryption on my domain?
If so, should I switch from the staging environment to the production environment?
In this question some people talk about that, but they emphasize:
that the staging environment should be used just to test that your client is working fine and can generate the challenges, certificates
In my case the HTTP challenge is not being completed even in the staging environment. :(

Here are the annotations I usually use for this:
"ingress.kubernetes.io/ssl-redirect": "true",
"certmanager.k8s.io/cluster-issuer": "letsencrypt-production",
# I'd suggest adding these 2 below
"kubernetes.io/tls-acme": "true",
"kubernetes.io/ingress.class": "nginx"
Also, you didn't spot this error:
I0315 12:10:58.033431 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I'm not sure what exactly is wrong here, but your domain name should resolve to your ingress, and you should be able to access yourdomain.name/.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE (this is the Let's Encrypt validation response URL, according to your logs).
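A quick way to check that from outside the cluster, using the token path from your logs (a sketch): a 404 here matches the 'propagation check failed' message and means the solver ingress isn't being routed by your ingress controller.
curl -i http://test1kongletsencrypt.possibilit.nl/.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE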

Related

Cipher mismatch error while trying to access an app deployed in GKE as HTTPS Ingress

I am trying to deploy a Spring Boot application running on port 8080. My goal is to serve HTTPS on a custom subdomain with Google managed certificates.
Here are my YAMLs.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
namespace: my-namespace
spec:
replicas: 1
selector:
matchLabels:
app: my-deployment
namespace: my-namespace
template:
metadata:
labels:
app: my-deployment
namespace: my-namespace
spec:
containers:
- name: app
image: gcr.io/PROJECT_ID/IMAGE:TAG
imagePullPolicy: Always
ports:
- containerPort: 8080
resources:
requests:
memory: "256Mi"
ephemeral-storage: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
ephemeral-storage: "512Mi"
cpu: "250m"
service.yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: my-namespace
annotations:
cloud.google.com/backend-config: '{"default": "my-http-health-check"}'
spec:
selector:
app: my-deployment
namespace: my-namespace
type: NodePort
ports:
- port: 80
name: http
targetPort: http
protocol: TCP
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
namespace: my-name-space
annotations:
kubernetes.io/ingress.global-static-ip-name: my-ip
networking.gke.io/managed-certificates: my-cert
kubernetes.io/ingress.class: "gce"
labels:
app: my-ingress
spec:
rules:
- host: my-domain.com
http:
paths:
- pathType: ImplementationSpecific
backend:
service:
name: my-service
port:
name: http
I followed various pieces of documentation; most of them helped me get HTTP working, but I couldn't make HTTPS work and it ends with the error ERR_SSL_VERSION_OR_CIPHER_MISMATCH. It looks like there is an issue with the "Global forwarding rule"; Ports shows 443-443. What is the correct way to terminate the HTTPS traffic at the load balancer and route it to the backend app over HTTP?
From the information provided, I can see that the "ManagedCertificate" object is missing; you need to create a YAML file with the following structure:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: my-cert
spec:
domains:
- <your-domain-name1>
- <your-domain-name2>
Then apply it with: kubectl apply -f file-name.yaml
Provisioning of the Google-managed certificate can take up to 60 minutes; you can check the status of the certificate with kubectl describe managedcertificate my-cert and wait for the status to become "Active".
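For example (the file name here is illustrative):
kubectl apply -f managed-cert.yaml
kubectl describe managedcertificate my-cert   # repeat until Certificate Status reports Active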
A few prerequisites you need to be aware of, though:
You must own the domain name. The domain name must be no longer than 63 characters. You can use Google Domains or another registrar.
The cluster must have the HttpLoadBalancing add-on enabled.
Your "kubernetes.io/ingress.class" must be "gce".
You must apply the Ingress and ManagedCertificate resources in the same project and namespace.
Create a reserved (static) external IP address. Reserving a static IP address guarantees that it remains yours, even if you delete the Ingress. If you do not reserve an IP address, it might change, requiring you to reconfigure your domain's DNS records.
Finally, you can take a look at Google's complete guide on Creating an Ingress with a Google-managed certificate.

Received plaintext http traffic on an https channel, closing connection

I have deployed ECK (using Helm) on my k8s cluster and I am attempting to install Elasticsearch following the docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
I have externally exposed service/elasticsearch-prod-es-http so that I can connect to it from outside of my k8s cluster. However, as you can see, when I try to connect to it either from curl or the browser I receive a "502 Bad Gateway" error.
curl elasticsearch.dev.acme.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
Upon checking the pod (elasticsearch-prod-es-default-0) I can see the following message repeated:
{"type": "server", "timestamp": "2021-04-27T13:12:20,048Z", "level": "WARN", "component": "o.e.x.s.t.n.SecurityNetty4HttpServerTransport", "cluster.name": "elasticsearch-prod", "node.name": "elasticsearch-prod-es-default-0", "message": "received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}", "cluster.uuid": "t0mRfv7kREGQhXW9DVM3Vw", "node.id": "nCyAItDmSqGZRa3lApsC6g" }
Can you help me understand why this is occurring and how to fix it?
I suspect it has something to do with my TLS configuration, because when I disable TLS I'm able to connect to it externally without issues. However, in a production environment I think keeping TLS enabled is important?
FYI, I am able to port-forward the service and connect to it with curl using the -k flag.
What I have tried
I have tried adding my domain to the SAN section as described here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san
I have tried using openssl to generate a self-signed certificate, but that did not work. Trying to connect locally returns the following error message.
curl -u "elastic:$PASSWORD" "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I have tried generating a certificate using the tool https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem
Then, using the generated .crt and .key, I created a Kubernetes secret elastic-tls-cert. But again, curling localhost without -k gave the following error:
curl --cacert cacert.pem -u "elastic:$PASSWORD" -XGET "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
elasticsearch.yml
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch-prod
namespace: elastic-system
spec:
version: 7.12.0
nodeSets:
- name: default
config:
# most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
node.roles: ["master", "data", "ingest", "ml"]
# this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
node.store.allow_mmap: false
xpack.security.enabled: true
podTemplate:
metadata:
labels:
# additional labels for pods
foo: bar
spec:
nodeSelector:
acme/node-type: ops
# this changes the kernel setting on the node to allow ES to use mmap
# if you uncomment this init container you will likely also want to remove the
# "node.store.allow_mmap: false" setting above
# initContainers:
# - name: sysctl
# securityContext:
# privileged: true
# command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
###
# uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification.
# automountServiceAccountToken: true
containers:
- name: elasticsearch
# specify resource limits and requests
resources:
limits:
memory: 4Gi
cpu: 1
env:
- name: ES_JAVA_OPTS
value: "-Xms2g -Xmx2g"
count: 3
# # request 2Gi of persistent data storage for pods in this topology element
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 250Gi
storageClassName: elasticsearch
# # inject secure settings into Elasticsearch nodes from k8s secrets references
# secureSettings:
# - secretName: ref-to-secret
# - secretName: another-ref-to-secret
# # expose only a subset of the secret keys (optional)
# entries:
# - key: value1
# path: newkey # project a key to a specific path (optional)
http:
service:
spec:
# expose this cluster Service with a LoadBalancer
type: NodePort
# tls:
# selfSignedCertificate:
# add a list of SANs into the self-signed HTTP certificate
subjectAltNames:
# - ip: 192.168.1.2
# - ip: 192.168.1.3
# - dns: elasticsearch.dev.acme.com
# - dns: localhost
# certificate:
# # provide your own certificate
# secretName: elastic-tls-cert
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0
resources
pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/elastic-operator-webhook ClusterIP 172.20.218.208 <none> 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator
service/elasticsearch-prod-es-default ClusterIP None <none> 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default
service/elasticsearch-prod-es-http NodePort 172.20.229.173 <none> 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
service/elasticsearch-prod-es-transport ClusterIP None <none> 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
aws alb ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: elastic-ingress
namespace: elastic-system
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: "<redacted>"
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: <redacted>
alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
alb.ingress.kubernetes.io/healthcheck-path: /health
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
rules:
- host: elasticsearch.dev.acme.com
http:
paths:
- path: /*
pathType: Prefix
backend:
service:
name: elasticsearch-prod-es-http
port:
number: 9200
# - host: kibana.dev.acme.com
# http:
# paths:
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: kibana-prod-kb-http
# port:
# number: 5601
If anyone comes across this problem in the future, make sure your ingress is properly configured. The error message suggests that it's a misconfiguration of the ingress:
received plaintext http traffic on an https channel, closing connection
In my case I am using aws-load-balancer-controller. I had to attach an annotation to my ingress that forces the connection to the backend to be HTTPS rather than HTTP.
alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
In my case this problem was fixed by adding the above annotation to my ingress; it had nothing to do with setting up a custom/private TLS certificate.
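In the ingress above that would look roughly like this (a sketch showing only the relevant annotation alongside a couple of the existing ones):
metadata:
  name: elastic-ingress
  namespace: elastic-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/backend-protocol: "HTTPS"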
My solution: the http => https mismatch. You have to disable HTTP SSL; to do this, modify the config/elasticsearch.yml file and set the associated variable to false:
xpack.security.http.ssl:
enabled: false
keystore.path: certs/http.p12

Ingress rewrite rule in aks agic gives 502

I'm trying to create HTTPS ingress for my node.js authentication (auth) REST service in AKS, but I'm getting a 502 Bad Gateway response.
Here's my deployment and service definitions:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
namespace: auth
labels:
app: auth
spec:
selector:
matchLabels:
app: auth
replicas: 1
template:
metadata:
labels:
app: auth
spec:
imagePullSecrets:
- name: docker-hub-creds
containers:
- name: auth
image: ***image***
ports:
- containerPort: 80
name: auth
---
apiVersion: v1
kind: Service
metadata:
name: auth
namespace: auth
spec:
selector:
app: auth
ports:
- protocol: TCP
port: 80
targetPort: 80
I think that's all pretty basic and it seems to work ok. I can see the service running and if I expose a node-port then I can access it with no problems. The service responds to well-formed POST requests on the /auth path with a JWT.
I have configured an Azure Application Gateway following Microsoft's instructions, and following the troubleshooting guide leads me to believe that the installation has worked ok. I have also checked through the web-ui and there appear to be no errors. Finally, I worked through the support options and the automated analysis of my cluster found no major configuration issues.
Next, I tried to create an HTTPS ingress route for my service, and this is where it goes wrong. This is made more complicated by the dynamic generation of certificates for TLS.
The ingress definition looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: auth-in
namespace: auth
annotations:
kubernetes.io/ingress.class: azure/application-gateway
cert-manager.io/cluster-issuer: letsencrypt-staging
cert-manager.io/acme-challenge-type: http01
ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- ***hostname***
secretName: ***secret***
rules:
- host: ***same hostname***
http:
paths:
- backend:
serviceName: auth
servicePort: 80
path: /api/(auth/.*)
I have two rewrite-targets in there because I can't determine which one this ingress controller uses. All the examples on the web use the nginx. prefix, so I added it in desperation, despite thinking that it's probably not necessary.
Accessing the service through: ***hostname***/api/auth results in a Bad Gateway error.
I have checked through the portal and I can see the route is registered, listeners and rules are there, and my service is listed in the backend pools, but there is nothing in the 'rewrite' tabs. I expected to see something in the rewrite tabs.
I've tooled my service to log all access, and the logs show this, repeatedly:
{"level":30,"time":1611739355140,"pid":17,"hostname":"auth-6c7757bb89-d72td","msg":"Req-URL: /api/(auth/.*)"}
Describing the ingress gives me this:
Name: auth-in
Namespace: auth
Address: **redacted***
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
***redacted cert name** terminates **hostname***
Rules:
Host Path Backends
---- ---- --------
***hostname***
/api/(auth/.*) auth:80 (10.0.0.69:80)
Annotations: cert-manager.io/acme-challenge-type: http01
cert-manager.io/cluster-issuer: letsencrypt-staging
ingress.kubernetes.io/rewrite-target: /$1
kubernetes.io/ingress.class: azure/application-gateway
nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 43m cert-manager Successfully created Certificate "***cert-name***"
Two things to note: first, the logs show that the URL isn't being rewritten -- it's being passed exactly as the path shows, including the regex part. Second, the Default backend entry in the ingress description shows an error. I'm not sure the second one matters, but the first is clearly wrong.
I am keen to discover how to diagnose the problem and then fix it.
Since you are using AGIC, you can include the Backend Path Prefix annotation appgw.ingress.kubernetes.io/backend-path-prefix: "/".
The Ingress will be like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: auth-in
namespace: auth
annotations:
kubernetes.io/ingress.class: azure/application-gateway
cert-manager.io/cluster-issuer: letsencrypt-staging
cert-manager.io/acme-challenge-type: http01
appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
tls:
- hosts:
- ***hostname***
secretName: ***secret***
rules:
- host: ***same hostname***
http:
paths:
- backend:
serviceName: auth
servicePort: 80
path: /api/auth/*
AGIC has also included a rewrite-rule-set (as of Nov 12 '21) as part of this PR. For rewrite rules, you can use the rewrite-rule-set annotation:
appgw.ingress.kubernetes.io/rewrite-rule-set: <rewrite rule set>

Istio Gateway Fail To Connect Via HTTPS

Deployments in a GKE cluster with Istio are working correctly via HTTP. But when I tried to secure them with cert-manager using the following resources, HTTPS requests fail with curl reporting:
`Immediate connect fail for 64:ff9b::2247:fd8a: Network is unreachable
* connect to 34.71.253.138 port 443 failed: Connection refused`.
What should I do to make it work with HTTPS as well?
ClusterIssuer with the following configuration:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: istio-system
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: iprocureservers@iprocu.re
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
solvers:
# ACME DNS-01 provider configurations
- dns01:
# Google Cloud DNS
clouddns:
# Secret from the google service account key
serviceAccountSecretRef:
name: cert-manager-credentials
key: gcp-dns-admin.json
# The project in which to update the DNS zone
project: iprocure-server
Certificate configuration like so, which produced a certificate in a Ready: True state:
apiVersion: cert-manager.io/v1alpha3
kind: Certificate
metadata:
name: letsencrypt-staging
namespace: istio-system
spec:
secretName: letsencrypt-staging
commonName: "*.iprocure.tk"
dnsNames:
- '*.iprocure.tk'
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
And lastly a Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: iprocure-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: false
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
credentialName: letsencrypt-staging
If I do kubectl describe certificate -n istio-system:
Name: letsencrypt-staging
Namespace: istio-system
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2020-10-13T13:32:37Z
Generation: 1
Resource Version: 28030994
Self Link: /apis/cert-manager.io/v1/namespaces/istio-system/certificates/letsencrypt-staging
UID: ad838d28-5349-4aaa-a618-cc3bfc316e6e
Spec:
Common Name: *.iprocure.tk
Dns Names:
*.iprocure.tk
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-staging-clusterissuer
Secret Name: letsencrypt-staging-cert-secret
Status:
Conditions:
Last Transition Time: 2020-10-13T13:35:05Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
Not After: 2021-01-11T12:35:05Z
Not Before: 2020-10-13T12:35:05Z
Renewal Time: 2020-12-12T12:35:05Z
Revision: 1
Events: <none>
Running kubectl get certificates -o wide -n istio-system yields:
NAME READY SECRET ISSUER STATUS AGE
letsencrypt-staging True letsencrypt-staging-cert-secret letsencrypt-staging-clusterissuer Certificate is up to date and has not expired 17h
Issue
I assume that HTTPS wasn't working because of the requirements that have to be enabled if you want to use cert-manager with Istio in older versions.
Solution
As @Yunus Einsteinium mentioned in the comments:
Thank you for guiding me in the right direction. Using the OSS Istio, not the GKE one, is the way to go! I managed to make HTTPS work!
So the solution here was to use OSS Istio installed with istioctl instead of the older Istio GKE add-on.
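For reference, installing the OSS distribution with istioctl looks roughly like this (a sketch assuming a recent istioctl release; choose the profile that fits your cluster):
# download the latest OSS Istio release and put istioctl on the PATH
curl -L https://istio.io/downloadIstio | sh -
cd istio-*/ && export PATH=$PWD/bin:$PATH
# install Istio into the current kube context
istioctl install --set profile=default -y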

Fail to connect to kubectl from client-go - /serviceaccount/token: no such file

I am using the Golang library client-go to connect to a locally running Kubernetes cluster. To start with, I took code from the example out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/ServiceAccount to use, or configure kubectl in a way that everyone can connect to its API.
Here's the status of my kubectl setup, shown through some command results:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://localhost:6443
name: docker-for-desktop-cluster
contexts:
- context:
cluster: docker-for-desktop-cluster
user: docker-for-desktop
name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME SECRETS AGE
default 1 3d
test-user 1 1d
$ kubectl describe serviceaccount test-user
Name: test-user
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: test-user-token-hxcsk
Tokens: test-user-token-hxcsk
Events: <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: test-user
kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
creationTimestamp: 2018-06-09T10:55:17Z
name: test-user-token-hxcsk
namespace: default
resourceVersion: "110618"
selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer could be a little outdated but I will try to give more perspective/baseline for future readers that encounter the same/similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of token in the /var/run/secrets/kubernetes.io/serviceaccount location when using in-cluster-client-configuration. Also, it could be related to the fact of using in-cluster-client-configuration code outside of the cluster (for example running this code directly on a laptop or in pure Docker container).
You can check following commands to troubleshoot your issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml:
look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
look for: containers.mounts and volumeMounts where Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
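For example, the upstream out-of-cluster example exposes a -kubeconfig flag, so running it from a laptop would look roughly like this (a sketch):
$ go run main.go -kubeconfig=$HOME/.kube/config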
Additional information:
I've added additional information for more reference on further troubleshooting when the code is run inside of a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
name: sdk
spec:
serviceAccountName: go-serviceaccount
automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the servicAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
go-serviceaccount-token-4rst8:
Type: Secret (a volume populated by a Secret)
SecretName: go-serviceaccount-token-4rst8
Optional: false
If it's not:
<-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e., the master, in order to provide that token, and I'm guessing it fails to for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!
