Received plaintext http traffic on an https channel, closing connection - elasticsearch

I have deployed ECK (using Helm) on my k8s cluster and I am attempting to install Elasticsearch following the docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
I have externally exposed service/elasticsearch-prod-es-http so that I can connect to it from outside of my k8s cluster. However, as you can see below, when I try to connect to it from either curl or the browser, I receive a "502 Bad Gateway" error:
curl elasticsearch.dev.acme.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
Upon checking the pod (elasticsearch-prod-es-default-0) I can see the following message repeated.
{"type": "server", "timestamp": "2021-04-27T13:12:20,048Z", "level": "WARN", "component": "o.e.x.s.t.n.SecurityNetty4HttpServerTransport", "cluster.name": "elasticsearch-prod", "node.name": "elasticsearch-prod-es-default-0", "message": "received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}", "cluster.uuid": "t0mRfv7kREGQhXW9DVM3Vw", "node.id": "nCyAItDmSqGZRa3lApsC6g" }
Can you help me understand why this is occurring and how to fix it?
I suspect it has something to do with my TLS configuration, because when I disable TLS I'm able to connect to it externally without issues. However, in a production environment I think keeping TLS enabled is important.
FYI, I am able to port-forward the service and connect to it with curl using the -k flag.
What I have tried
I have tried adding my domain to the subjectAltNames section as described here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san
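For reference, that SAN configuration ends up under spec.http in the Elasticsearch resource, roughly like this (a sketch using the hostname from this post; adjust to your own setup):
http:
  tls:
    selfSignedCertificate:
      # extra names the self-signed HTTP certificate should be valid for
      subjectAltNames:
      - dns: elasticsearch.dev.acme.com
      - dns: localhost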
I have tried using openssl to generate a self-signed certificate, but that did not work. Trying to connect locally returns the following error message:
curl -u "elastic:$PASSWORD" "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I have tried generating a certificate using the elasticsearch-certutil tool: https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem
Then, using the generated .crt and .key, I created a Kubernetes secret named elastic-tls-cert. But again, curling localhost without -k gave the following error:
curl --cacert cacert.pem -u "elastic:$PASSWORD" -XGET "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
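For reference, ECK expects that secret to contain tls.crt and tls.key entries (for example, one created with kubectl create secret tls elastic-tls-cert --cert=<your .crt> --key=<your .key>) and to be referenced under spec.http in the Elasticsearch resource; a sketch mirroring the commented-out certificate section of the manifest below:
http:
  tls:
    certificate:
      # the Secret must contain tls.crt and tls.key (and optionally ca.crt)
      secretName: elastic-tls-cert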
elasticsearch.yml
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch-prod
namespace: elastic-system
spec:
version: 7.12.0
nodeSets:
- name: default
config:
# most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
node.roles: ["master", "data", "ingest", "ml"]
# this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
node.store.allow_mmap: false
xpack.security.enabled: true
podTemplate:
metadata:
labels:
# additional labels for pods
foo: bar
spec:
nodeSelector:
acme/node-type: ops
# this changes the kernel setting on the node to allow ES to use mmap
# if you uncomment this init container you will likely also want to remove the
# "node.store.allow_mmap: false" setting above
# initContainers:
# - name: sysctl
# securityContext:
# privileged: true
# command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
###
# uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification.
# automountServiceAccountToken: true
containers:
- name: elasticsearch
# specify resource limits and requests
resources:
limits:
memory: 4Gi
cpu: 1
env:
- name: ES_JAVA_OPTS
value: "-Xms2g -Xmx2g"
count: 3
# # request 2Gi of persistent data storage for pods in this topology element
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 250Gi
storageClassName: elasticsearch
# # inject secure settings into Elasticsearch nodes from k8s secrets references
# secureSettings:
# - secretName: ref-to-secret
# - secretName: another-ref-to-secret
# # expose only a subset of the secret keys (optional)
# entries:
# - key: value1
# path: newkey # project a key to a specific path (optional)
http:
service:
spec:
# expose this cluster Service with a LoadBalancer
type: NodePort
# tls:
# selfSignedCertificate:
# add a list of SANs into the self-signed HTTP certificate
subjectAltNames:
# - ip: 192.168.1.2
# - ip: 192.168.1.3
# - dns: elasticsearch.dev.acme.com
# - dns: localhost
# certificate:
# # provide your own certificate
# secretName: elastic-tls-cert
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0
resources
pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/elastic-operator-webhook ClusterIP 172.20.218.208 <none> 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator
service/elasticsearch-prod-es-default ClusterIP None <none> 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default
service/elasticsearch-prod-es-http NodePort 172.20.229.173 <none> 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
service/elasticsearch-prod-es-transport ClusterIP None <none> 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
aws alb ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: elastic-ingress
namespace: elastic-system
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: "<redacted>"
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: <redacted>
alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
alb.ingress.kubernetes.io/healthcheck-path: /health
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
rules:
- host: elasticsearch.dev.acme.com
http:
paths:
- path: /*
pathType: Prefix
backend:
service:
name: elasticsearch-prod-es-http
port:
number: 9200
# - host: kibana.dev.acme.com
# http:
# paths:
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: kibana-prod-kb-http
# port:
# number: 5601

If anyone comes across this problem in the future, make sure your ingress is properly configured. The error message suggests that it's a misconfiguration of the ingress:
received plaintext http traffic on an https channel, closing connection
In my case I am using the aws-load-balancer-controller. I had to attach an annotation to my ingress that forces the connection to the backend to be HTTPS rather than HTTP:
alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
In my case this problem was fixed by adding the above annotation to my ingress manifest; it had nothing to do with setting up a custom/private TLS certificate.
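On the Ingress from the question, that annotation sits alongside the other alb.ingress.kubernetes.io annotations, roughly like this (an abbreviated sketch):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress
  namespace: elastic-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    # make the ALB talk HTTPS to the Elasticsearch backend instead of plain HTTP
    alb.ingress.kubernetes.io/backend-protocol: "HTTPS"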

solution for me: http => https

You have to disable HTTP SSL; to do this, modify the config/elasticsearch.yml file and set the associated variable to false:
xpack.security.http.ssl:
enabled: false
keystore.path: certs/http.p12
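Note that for an ECK-managed cluster like the one in the question you would not edit config/elasticsearch.yml by hand; if you really want to turn off TLS on the HTTP layer (at the cost of the security the original poster wants to keep), the rough equivalent in the Elasticsearch resource is:
spec:
  http:
    tls:
      selfSignedCertificate:
        # sketch: disables the self-signed certificate, so port 9200 serves plain HTTP
        disabled: true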

Related

Cannot get elastic working via kubernetes ingress

I have the following service:
# kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kib-opendistro-es-client-service ClusterIP 10.233.19.199 <none> 9200/TCP,9300/TCP,9600/TCP,9650/TCP 279d
#
When I perform a curl to the IP address of the service it works fine:
# curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
I created an ingress so I can access the service from outside:
# kubectl get ingress ingress-elasticsearch -n logging
NAME HOSTS ADDRESS PORTS AGE
ingress-elasticsearch elasticsearch.host.com 10.32.200.4,10.32.200.7,10.32.200.8 80, 443 11h
#
When performing a curl to either 10.32.200.4, 10.32.200.7 or 10.32.200.8, I am getting an openresty 502 Bad Gateway response:
$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
When tailing the pod logs, I am seeing the following when performing the curl command:
# kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: elasticsearch
name: ingress-elasticsearch
namespace: logging
spec:
rules:
- host: elasticsearch.host.com
http:
paths:
- backend:
serviceName: es-kib-opendistro-es-client-service
servicePort: 9200
path: /
tls:
- hosts:
- elasticsearch.host.com
secretName: cred-secret
status:
loadBalancer:
ingress:
- ip: 10.32.200.4
- ip: 10.32.200.7
- ip: 10.32.200.8
My service:
apiVersion: v1
kind: Service
metadata:
labels:
app: es-kib-opendistro-es
chart: opendistro-es-1.9.0
heritage: Tiller
release: es-kib
role: client
name: es-kib-opendistro-es-client-service
namespace: logging
spec:
clusterIP: 10.233.19.199
ports:
- name: http
port: 9200
protocol: TCP
targetPort: 9200
- name: transport
port: 9300
protocol: TCP
targetPort: 9300
- name: metrics
port: 9600
protocol: TCP
targetPort: 9600
- name: rca
port: 9650
protocol: TCP
targetPort: 9650
selector:
role: client
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
What is wrong with my setup?
By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.
Your backend service is expecting HTTPS requests though, so you need to tell the nginx ingress controller to use HTTPS.
You can do so by adding an annotation to the Ingress resource like this:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Details about this annotation are in the documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
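Applied to the Ingress from the question, the annotation would sit in the metadata, roughly like this (a sketch of just that section):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-elasticsearch
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    # proxy to the opendistro-es backend over HTTPS instead of plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"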

Istio Gateway Fail To Connect Via HTTPS

Deployments in a GKE cluster with Istio are working correctly via HTTP. But when I tried to secure them with cert-manager using the following resources, HTTPS requests fail on curl like so:
`Immediate connect fail for 64:ff9b::2247:fd8a: Network is unreachable
* connect to 34.71.253.138 port 443 failed: Connection refused`.
What should I do to make it work with HTTPS as well?
ClusterIssuer with following configuration
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: istio-system
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: iprocureservers@iprocu.re
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
solvers:
# ACME DNS-01 provider configurations
- dns01:
# Google Cloud DNS
clouddns:
# Secret from the google service account key
serviceAccountSecretRef:
name: cert-manager-credentials
key: gcp-dns-admin.json
# The project in which to update the DNS zone
project: iprocure-server
Certificate configuration like so, which produced a certificate in a Ready: True state:
apiVersion: cert-manager.io/v1alpha3
kind: Certificate
metadata:
name: letsencrypt-staging
namespace: istio-system
spec:
secretName: letsencrypt-staging
commonName: "*.iprocure.tk"
dnsNames:
- '*.iprocure.tk'
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
And lastly a Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: iprocure-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: false
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
credentialName: letsencrypt-staging
If I do kubectl describe certificate -n istio-system:
Name: letsencrypt-staging
Namespace: istio-system
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2020-10-13T13:32:37Z
Generation: 1
Resource Version: 28030994
Self Link: /apis/cert-manager.io/v1/namespaces/istio-system/certificates/letsencrypt-staging
UID: ad838d28-5349-4aaa-a618-cc3bfc316e6e
Spec:
Common Name: *.iprocure.tk
Dns Names:
*.iprocure.tk
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-staging-clusterissuer
Secret Name: letsencrypt-staging-cert-secret
Status:
Conditions:
Last Transition Time: 2020-10-13T13:35:05Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
Not After: 2021-01-11T12:35:05Z
Not Before: 2020-10-13T12:35:05Z
Renewal Time: 2020-12-12T12:35:05Z
Revision: 1
Events: <none>
Running kubectl get certificates -o wide -n istio-system yields:
NAME READY SECRET ISSUER STATUS AGE
letsencrypt-staging True letsencrypt-staging-cert-secret letsencrypt-staging-clusterissuer Certificate is up to date and has not expired 17h
Issue
I assume that HTTPS wasn't working because of the requirements which have to be enabled if you want to use cert-manager with Istio in older versions.
Solution
As @Yunus Einsteinium mentioned in the comments:
Thank you for guiding me in the right direction. Using the OSS Istio, not the GKE one, is the way to go! I managed to make HTTPS work!
So the solution here was to use OSS Istio installed with istioctl instead of the older Istio GKE add-on.
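For completeness, an istioctl installation is typically driven by an IstioOperator manifest; a minimal sketch (assuming the default profile is enough, which installs istiod plus the istio-ingressgateway selected by the Gateway above) would be:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-default
  namespace: istio-system
spec:
  # "default" profile: istiod control plane plus the ingress gateway
  profile: default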

Access Elasticsearch from minikube/kubernetes

I have a Spring Boot application which is deployed in Kubernetes on a local Windows machine using Minikube. I also have Elasticsearch running on my local machine (http://localhost:9200).
I want to call Elasticsearch REST endpoints from this Spring Boot app.
I tried solving this by creating a service without a selector, but I am not sure what I am missing.
When accessing the Spring Boot app using http://#minikube_ip#:#Node_Port#, I get a "No route to host" error.
I tried doing minikube ssh and executing the curl command; from there I also get the same error. Clearly I am missing something here.
application.yaml
elasticsearch:
hosts:
- http://my-es:80
connectTimeout: 10000
connectionRequestTimeout: 10000
socketTimeout: 10000
maxRetryTimeoutMillis: 60000
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-es-app
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: kube-es-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: kube-es-app
spec:
containers:
- image: elastic-search-app:latest
imagePullPolicy: Never
name: kube-es-app
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
name: my-es
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
name: my-es
subsets:
- addresses:
- ip: <MY_LOCAL_MACHINE_IP>
ports:
- port: 9200
Commands I executed
docker build -t elastic-search-app .
kubectl create -f deployment.yaml
kubectl expose deployment/kube-es-app --type="NodePort" --port 8080
Can anyone help please? I am stuck
If I've got the description right, the Windows machine should have a VirtualBox network adapter connected to the host-only network that the Minikube VM is connected to.
Minikube can access the host machine directly because both are on the same network.
Minikube is in charge of NAT-ing packets from Pods to the outside. What you need is to allow Elasticsearch to listen on the VirtualBox interface (or all interfaces) and to open its port in the Windows firewall. Then Elasticsearch should be reachable via the IP address of the Windows host on the host-only network.
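On the Windows host that usually comes down to something like this in Elasticsearch's config/elasticsearch.yml (a sketch; adjust to your network, and note that binding to a non-loopback address triggers production bootstrap checks unless you also set single-node discovery):
network.host: 0.0.0.0        # listen on all interfaces, not just localhost
discovery.type: single-node  # keep bootstrap checks happy for a single local node
http.port: 9200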
Apart from that, you might create a service (if you need to go by name instead of IP) as discussed here:
Connect to local database from inside minikube cluster,
Minikube:Exposing mysql as a service on localhost.

HTTPS encryption is not active for my domain. My certificate Order is not completed

I am working with cert-manager in my Kubernetes cluster in order to get certificates signed by the Let's Encrypt CA for my service application inside my cluster.
I am performing the following steps in the order presented. I've tried to provide as many details of my process as possible in order to understand the behavior presented.
Install the CustomResourceDefinition resources separately
⟩ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
Label the cert-manager namespace to disable resource validation
⟩ kubectl label namespace kube-system certmanager.k8s.io/disable-validation=true
namespace/kube-system labeled
Install the cert-manager Helm chart
⟩ helm install \
--name cert-manager \
--namespace kube-system \
--version v0.7.0 \
jetstack/cert-manager
I've confirmed the steps in this guide, in order to avoid possible problems, and all steps are ok ...
Creating my ingress
I am using kong-ingress-controller to manage the ingress process.
⟩ kubectl get pod,svc,deploy,replicaset -n kong | grep kong-ingress-controller
pod/kong-ingress-controller-667b4748d4-ccj8z 2/2 Running 14 95m
service/kong-ingress-controller NodePort 10.0.48.131 <none> 8001:32257/TCP 3d19h
deployment.extensions/kong-ingress-controller 1 1 1 1 3d19h
replicaset.extensions/kong-ingress-controller-667b4748d4 1 1 1 3d19h
This means that my external IP address is given by kong-proxy and is 52.166.60.158:
⟩ kubectl get svc -n kong | grep kong-proxy
kong-proxy LoadBalancer 10.0.153.8 52.166.60.158 80:31577/TCP,443:32323/TCP 3d21h
I've created the ingress for the first time this way:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kong-ingress-zcrm365
# namespace: default
annotations:
# kubernetes.io/ingress.class: "nginx" # Don't include it in order to use kong-ingress-controller
# add an annotation indicating the issuer to use.
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- path: "/"
backend:
serviceName: zcrm365dev
servicePort: 80
#- backend:
# serviceName: zcrm365dev
# servicePort: 80
# path: /
tls: # < placing a host in the TLS config will indicate a cert should be created
- hosts:
- test1kongletsencrypt.possibilit.nl
secretName: cert-manager-webhook-webhook-tls
# for the moment I've included this secret which was created by cert-manager installation
Apply it.
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 created
And my ingress is picked up by the kong-ingress-controller:
⟩ kubectl describe ingress kong-ingress-zcrm365
Name: kong-ingress-zcrm365
Namespace: default
Address: 52.166.60.158
Default backend: default-http-backend:80 (<none>)
TLS:
cert-manager-webhook-webhook-tls terminates test1kongletsencrypt.possibilit.nl
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/ zcrm365dev:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"cert-manager-webhook-webhook-tls"}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 3m30s kong-ingress-controller Ingress default/kong-ingress-zcrm365
Normal UPDATE 3m28s kong-ingress-controller Ingress default/kong-ingress-zcrm365
Creating a ClusterIssuer
I am going to create a ClusterIssuer; I could create an Issuer instead, but I've started with a ClusterIssuer. Which is the best alternative? That depends on our deployment and future requirements, mostly around namespaces.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: b.garcia@possibilit.nl
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
http01: {}
Apply it
⟩ kubectl apply -f 01-lets-encrypt-issuer-staging.yaml
clusterissuer.certmanager.k8s.io/letsencrypt-staging created
This ClusterIssuer was registered on the ACME Let's Encrypt server:
⟩ kubectl describe clusterissuers letsencrypt-staging
Name: letsencrypt-staging
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-staging"},"spec":{"acm...
API Version: certmanager.k8s.io/v1alpha1
Kind: ClusterIssuer
Metadata:
Creation Timestamp: 2019-03-15T11:38:03Z
Generation: 1
Resource Version: 623999
Self Link: /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-staging
UID: cb48b391-4716-11e9-a113-e27267a7d354
Spec:
Acme:
Email: b.garcia@possibilit.nl
Http 01:
Private Key Secret Ref:
Name: letsencrypt-staging
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Status:
Acme:
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/8579841
Conditions:
Last Transition Time: 2019-03-15T11:38:05Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
Modifying my ingress resource created previously
Now that I've created our Let's Encrypt staging ClusterIssuer, I am ready to modify the Ingress resource we created above and enable TLS encryption for the test1kongletsencrypt.possibilit.nl paths by adding the following.
I am going to add the certmanager.k8s.io/cluster-issuer: letsencrypt-staging annotation and use the secret named letsencrypt-staging created by the letsencrypt-staging ClusterIssuer.
Our ingress now looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kong-ingress-zcrm365
#namespace: default
annotations:
# kubernetes.io/ingress.class: "nginx" #new
# certmanager.k8s.io/acme-challenge-type: http01
# add an annotation indicating the issuer to use.
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- path: "/"
backend:
serviceName: zcrm365dev
servicePort: 80
tls:
- hosts:
- test1kongletsencrypt.possibilit.nl
secretName: letsencrypt-staging # I've added this secret of letsencrypt cluster issuer
Apply it
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 configured
[I]
This update to the ingress creates another ingress named cm-acme-http-solver-jr4fg:
⟩ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
cm-acme-http-solver-jr4fg test1kongletsencrypt.possibilit.nl 80 33s
kong-ingress-zcrm365 test1kongletsencrypt.possibilit.nl 52.166.60.158 80, 443 56m
The detail of the cm-acme-http-solver-jr4fg ingress is:
⟩ kubectl get ingress cm-acme-http-solver-jr4fg -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
creationTimestamp: "2019-03-15T12:10:57Z"
generateName: cm-acme-http-solver-
generation: 1
labels:
certmanager.k8s.io/acme-http-domain: "4095675862"
certmanager.k8s.io/acme-http-token: "657526223"
name: cm-acme-http-solver-jr4fg
namespace: default
ownerReferences:
- apiVersion: certmanager.k8s.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: Challenge
name: letsencrypt-staging-2613163196-0
uid: 638f1701-471b-11e9-a113-e27267a7d354
resourceVersion: "628284"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/cm-acme-http-solver-jr4fg
uid: 640ef483-471b-11e9-a113-e27267a7d354
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- backend:
serviceName: cm-acme-http-solver-svmvw
servicePort: 8089
path: /.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE
status:
loadBalancer:
ingress:
- ip: 52.166.60.158
And the detail of our kong-ingress-zcrm365 ingress resource is:
⟩ kubectl describe ingress kong-ingress-zcrm365
Name: kong-ingress-zcrm365
Namespace: default
Address: 52.166.60.158
Default backend: default-http-backend:80 (<none>)
TLS:
letsencrypt-staging terminates test1kongletsencrypt.possibilit.nl
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/ zcrm365dev:80 (<none>)
Annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/cluster-issuer":"letsencrypt-staging"},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"letsencrypt-staging"}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 60m kong-ingress-controller Ingress default/kong-ingress-zcrm365
Normal UPDATE 4m25s (x2 over 60m) kong-ingress-controller Ingress default/kong-ingress-zcrm365
Normal CreateCertificate 4m25s cert-manager Successfully created Certificate "letsencrypt-staging"
We can see that, even though our ingress uses the kong-ingress-controller, the letsencrypt-staging certificate has been created in the default namespace:
⟩ kubectl get certificates
NAME
letsencrypt-staging
The letsencrypt-staging certificate has the following detail:
⟩ kubectl describe certificate letsencrypt-staging
Name: letsencrypt-staging
Namespace: default
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-15T12:10:55Z
Generation: 1
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: kong-ingress-zcrm365
UID: 8643558f-4713-11e9-a113-e27267a7d354
Resource Version: 628164
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/letsencrypt-staging
UID: 62b3a31e-471b-11e9-a113-e27267a7d354
Spec:
Acme:
Config:
Domains:
test1kongletsencrypt.possibilit.nl
Http 01:
Dns Names:
test1kongletsencrypt.possibilit.nl
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: letsencrypt-staging
Status:
Conditions:
Last Transition Time: 2019-03-15T12:10:55Z
Message: Certificate issuance in progress. Temporary certificate issued.
Reason: TemporaryCertificate
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Generated 7m24s cert-manager Generated new private key
Normal GenerateSelfSigned 7m24s cert-manager Generated temporary self signed certificate
Normal OrderCreated 7m23s cert-manager Created Order resource "letsencrypt-staging-2613163196"
I can see that my order is not completed; it was only created in the OrderCreated event. It has already been 7 minutes since I created this certificate, the order has not completed, and for that reason the certificate has not been issued successfully.
Another thing that happens is that the letsencrypt-staging secret created by the letsencrypt-staging ClusterIssuer, and its respective certificate, only contain the tls.key:
⟩ kubectl describe secrets letsencrypt-staging -n kube-system
Name: letsencrypt-staging
Namespace: kube-system
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
tls.key: 1675 bytes
As I understand it, if the Let's Encrypt order completes and the certificate is issued, the letsencrypt-staging secret should also contain a tls.crt key, and maybe my letsencrypt-staging secret would be of type kubernetes.io/tls and not Opaque?
When I look at the logs of my cert-manager pod I get the following output; I think the HTTP challenge is not executed:
I0315 12:10:57.833858 1 logger.go:103] Calling Discover
I0315 12:10:57.856136 1 pod.go:64] No existing HTTP01 challenge solver pod found for Certificate "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:57.923080 1 service.go:51] No existing HTTP01 challenge solver service found for Certificate "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:57.989596 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 12:10:57.989682 1 ingress.go:98] No existing HTTP01 challenge solver ingress found for Challenge "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:58.014803 1 controller.go:178] ingress-shim controller: syncing item 'default/cm-acme-http-solver-jr4fg'
I0315 12:10:58.014842 1 sync.go:64] Not syncing ingress default/cm-acme-http-solver-jr4fg as it does not contain necessary annotations
I0315 12:10:58.014846 1 controller.go:184] ingress-shim controller: Finished processing work item "default/cm-acme-http-solver-jr4fg"
I0315 12:10:58.015447 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 12:10:58.033431 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I0315 12:10:58.079504 1 controller.go:212] challenges controller: Finished processing work item "default/letsencrypt-staging-2613163196-0"
I0315 12:10:58.079616 1 controller.go:206] challenges controller: syncing item 'default/letsencrypt-staging-2613163196-0'
I0315 12:10:58.079569 1 controller.go:184] orders controller: syncing item 'default/letsencrypt-staging-2613163196'
I get this message: No existing HTTP01 challenge solver pod found for Certificate "default/letsencrypt-staging-2613163196-0".
Based on this, I decided to add the certmanager.k8s.io/acme-challenge-type: http01 annotation to my kong-ingress-zcrm365 ingress, but nothing happened; my ingress is updated, but nothing more.
All this confirms that the TLS certificate was not successfully issued and HTTPS encryption is not active for my configured domain test1kongletsencrypt.possibilit.nl.
This leaves my letsencrypt-staging certificate with Status: False, and the created Order does not advance to completion, so the certificate is not issued.
Conditions:
Last Transition Time: 2019-03-15T12:10:55Z
Message: Certificate issuance in progress. Temporary certificate issued.
Reason: TemporaryCertificate
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Generated 51m cert-manager Generated new private key
Normal GenerateSelfSigned 51m cert-manager Generated temporary self signed certificate
Normal Cleanup 5m42s cert-manager Deleting old Order resource "letsencrypt-staging-2613163196"
Normal OrderCreated 5m42s cert-manager Created Order resource "letsencrypt-staging-2965106631"
Normal OrderCreated 39s (x2 over 51m) cert-manager Created Order resource "letsencrypt-staging-2613163196"
Normal Cleanup 39s cert-manager Deleting old Order resource "letsencrypt-staging-2965106631"
How can I get my certificate signed and successfully issued by the Let's Encrypt CA, and HTTPS encryption activated?
What is happening with these log messages?
kubectl logs -n kube-system cert-manager-6f68b58796-q7txg
0315 13:06:11.027204 1 logger.go:103] Calling Discover
I0315 13:06:11.032299 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 13:06:11.046081 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I0315 13:06:11.046109 1 controller.go:212] challenges controller: Finished processing work item "default/letsencrypt-staging-2613163196-0"
I0315 13:06:21.046242 1 controller.go:206] challenges controller: syncing item 'default/letsencrypt-staging-2613163196-0'
I've heard that the Let's Encrypt staging environment only issues test certificates, which are a kind of 'fake certificate', and maybe some clients like my Chrome/Firefox browser don't trust the certificate issuer ...
Is this a reason why I cannot enable HTTPS encryption on my domain?
If so, should I change from the staging environment to the production environment?
In this question some people talk about that, but they emphasize:
that the staging environment should be used just to test that your client is working fine and can generate the challenges, certificates
In my case the HTTP challenge is still not generated, even in the staging environment. :(
Here are the annotations I usually use for this:
"ingress.kubernetes.io/ssl-redirect": "true",
"certmanager.k8s.io/cluster-issuer": "letsencrypt-production",
# I'd suggest adding these 2 below
"kubernetes.io/tls-acme": "true",
"kubernetes.io/ingress.class": "nginx"
Also, you didn't spot this error:
I0315 12:10:58.033431 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I'm not sure what is wrong here exactly; your domain name should resolve to your ingress, and you should be able to access yourdomain.name/.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE (this is the Let's Encrypt validation response URL, according to your logs).

How do I access this Kubernetes service via kubectl proxy?

I want to access my Grafana Kubernetes service via the kubectl proxy server, but for some reason it won't work even though I can make it work for other services. Given the below service definition, why is it not available on http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana?
grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: monitoring
name: grafana
labels:
app: grafana
spec:
type: NodePort
ports:
- name: web
port: 3000
protocol: TCP
nodePort: 30902
selector:
app: grafana
grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: monitoring
name: grafana
spec:
replicas: 1
template:
metadata:
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:4.1.1
env:
- name: GF_AUTH_BASIC_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
name: grafana-credentials
key: user
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: grafana-credentials
key: password
volumeMounts:
- name: grafana-storage
mountPath: /var/grafana-storage
ports:
- name: web
containerPort: 3000
resources:
requests:
memory: 100Mi
cpu: 100m
limits:
memory: 200Mi
cpu: 200m
- name: grafana-watcher
image: quay.io/coreos/grafana-watcher:v0.0.5
args:
- '--watch-dir=/var/grafana-dashboards'
- '--grafana-url=http://localhost:3000'
env:
- name: GRAFANA_USER
valueFrom:
secretKeyRef:
name: grafana-credentials
key: user
- name: GRAFANA_PASSWORD
valueFrom:
secretKeyRef:
name: grafana-credentials
key: password
resources:
requests:
memory: "16Mi"
cpu: "50m"
limits:
memory: "32Mi"
cpu: "100m"
volumeMounts:
- name: grafana-dashboards
mountPath: /var/grafana-dashboards
volumes:
- name: grafana-storage
emptyDir: {}
- name: grafana-dashboards
configMap:
name: grafana-dashboards
The error I'm seeing when accessing the above URL is "no endpoints available for service "grafana"", error code 503.
With Kubernetes 1.10 the proxy URL should be slightly different, like this:
http://localhost:8080/api/v1/namespaces/default/services/SERVICE-NAME:PORT-NAME/proxy/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls
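Applied to the service in this question (namespace monitoring, service grafana, port name web, and the kubectl proxy port 8001 used above), that pattern would give roughly:
http://localhost:8001/api/v1/namespaces/monitoring/services/grafana:web/proxy/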
As Michael says, quite possibly your labels or namespaces are mismatched. However, in addition to that, keep in mind that even when you fix the endpoint, the URL you're after (http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana) might not work correctly.
Depending on your root_url and/or static_root_path grafana configuration settings, when trying to login you might get grafana trying to POST to http://localhost:8001/login and get a 404.
Try using kubectl port-forward instead:
kubectl -n monitoring port-forward [grafana-pod-name] 3000
then access grafana via http://localhost:3000/
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
The issue is that Grafana's port is named web, and as a result one needs to append :web to the kubectl proxy URL: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana:web.
An alternative is to instead not name the Grafana port, because then you don't have to append :web to the kubectl proxy URL for the service: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana. I went with this option in the end since it's easier.
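In other words, the alternative amounts to dropping the port name from the Service spec, roughly like this (a sketch of the relevant part of grafana-service.yaml above):
spec:
  type: NodePort
  ports:
  - port: 3000        # no "name:" field, so ":web" can be dropped from the proxy URL
    protocol: TCP
    nodePort: 30902
  selector:
    app: grafana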
There are a few factors that might be causing this issue.
The service expects to find one or more supporting endpoints, which it discovers through matching rules on the labels. If the labels don't align, then the service won't find endpoints, and the network gateway function performed by the service will result in 503.
The port declared by the POD and the process within the container are misaligned from the --target-port expected by the service.
Either one of these might generate the error. Let's take a closer look.
First, kubectl describe the service:
$ kubectl describe svc grafana01-grafana-3000
Name: grafana01-grafana-3000
Namespace: default
Labels: app=grafana01-grafana
chart=grafana-0.3.7
component=grafana
heritage=Tiller
release=grafana01
Annotations: <none>
Selector: app=grafana01-grafana,component=grafana,release=grafana01
Type: NodePort
IP: 10.0.0.197
Port: <unset> 3000/TCP
NodePort: <unset> 30905/TCP
Endpoints: 10.1.45.69:3000
Session Affinity: None
Events: <none>
Notice that my grafana service has 1 endpoint listed (there could be multiple). The error above in your example indicates that you won't have endpoints listed here.
Endpoints: 10.1.45.69:3000
Let's take a look next at the selectors. In the example above, you can see I have 3 selector labels on my service:
Selector: app=grafana01-grafana,component=grafana,release=grafana01
I'll kubectl describe my pods next:
$ kubectl describe pod grafana
Name: grafana01-grafana-1843344063-vp30d
Namespace: default
Node: 10.10.25.220/10.10.25.220
Start Time: Fri, 14 Jul 2017 03:25:11 +0000
Labels: app=grafana01-grafana
component=grafana
pod-template-hash=1843344063
release=grafana01
...
Notice that the labels on the pod align correctly, hence my service finds pods which provide endpoints which are load balanced against by the service. Verify that this part of the chain isn't broken in your environment.
If you do find that the labels are correct, you may still have a disconnect in that the grafana process running within the container within the pod is running on a different port than you expect.
$ kubectl describe pod grafana
Name: grafana01-grafana-1843344063-vp30d
...
Containers:
grafana:
Container ID: docker://69f11b7828c01c5c3b395c008d88e8640c5606f4d865107bf4b433628cc36c76
Image: grafana/grafana:latest
Image ID: docker-pullable://grafana/grafana@sha256:11690015c430f2b08955e28c0e8ce7ce1c5883edfc521b68f3fb288e85578d26
Port: 3000/TCP
State: Running
Started: Fri, 14 Jul 2017 03:25:26 +0000
If for some reason, your port under the container listed a different value, then the service is effectively load balancing against an invalid endpoint.
For example, if it listed port 80:
Port: 80/TCP
Or was an empty value
Port:
Then even if your label selectors were correct, the service would never find a valid response from the pod and would remove the endpoint from the rotation.
I suspect your issue is the first problem above (mismatched label selectors).
If both the label selectors and ports align, then you might have a problem with the MTU setting between nodes. In some cases, if the MTU used by your networking layer (like calico) is larger than the MTU of the supporting network, then you'll never get a valid response from the endpoint. Typically, this last potential issue will manifest itself as a timeout rather than a 503 though.
Your Deployment may not have the label app: grafana, or it may be in another namespace. Could you also post the Deployment definition?
